
Commit 3ca80cd

Add taichiCourse01 & marching_squares release tests
1 parent 373e32c commit 3ca80cd

108 files changed

Lines changed: 2799 additions & 852 deletions


.github/workflows/scripts/ghstack-perm-check.py

Lines changed: 3 additions & 1 deletion
@@ -92,8 +92,10 @@ def must(cond, msg):
     for cr in checkruns["check_runs"]:
         status = cr.get("conclusion", cr["status"])
         name = cr["name"]
+        if name == "Copilot for PRs":
+            continue
         must(
-            status == "success",
+            status in ("success", "neutral"),
            f"PR #{n} check-run `{name}`'s status `{status}` is not success!",
        )
    print("SUCCESS!")
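The effect of this change can be sketched in plain Python with mock data (the `checkruns` structure below mirrors the shape of GitHub's check-runs API response, but the run names, statuses, and the simplified `must` helper are invented for illustration):

```python
# Conceptual sketch of the updated check-run gate, using mock data.
# The real script queries GitHub's check-runs API; this only mirrors its shape.

def must(cond, msg):
    # Simplified stand-in for the script's must(): abort with a message.
    if not cond:
        raise SystemExit(msg)

def gate(checkruns, n=123):
    for cr in checkruns["check_runs"]:
        # A finished run has a "conclusion"; a pending one only has "status".
        status = cr.get("conclusion", cr["status"])
        name = cr["name"]
        # Copilot reviews are advisory, so they no longer block the merge.
        if name == "Copilot for PRs":
            continue
        # "neutral" (e.g. a skipped job) is now accepted alongside "success".
        must(
            status in ("success", "neutral"),
            f"PR #{n} check-run `{name}`'s status `{status}` is not success!",
        )
    return True

mock = {"check_runs": [
    {"name": "build", "status": "completed", "conclusion": "success"},
    {"name": "lint", "status": "completed", "conclusion": "neutral"},
    {"name": "Copilot for PRs", "status": "completed", "conclusion": "failure"},
]}
print(gate(mock))  # True: the failing Copilot run is ignored
```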

.github/workflows/scripts/ti_build/entry.py

Lines changed: 1 addition & 1 deletion
@@ -32,7 +32,7 @@ def build_wheel(python: Command, pip: Command) -> None:
    Build the Taichi wheel
    """

-    git.fetch("origin", "master", "--tags")
+    git.fetch("origin", "master", "--tags", "--force")
    proj_tags = []
    extra = []

.github/workflows/scripts/unix-perf-mon.sh

Lines changed: 2 additions & 1 deletion
@@ -17,6 +17,8 @@ export ANDROID_NDK_ROOT=/android-sdk/ndk-bundle
 python -m pip uninstall -y taichi taichi-nightly || true
 python -m pip install dist/*.whl

+TAG=$(git describe --exact-match --tags 2>/dev/null || true)
+
 git clone --depth=1 https://github.com/taichi-dev/taichi_benchmark

 cd taichi_benchmark
@@ -28,7 +30,6 @@ if [ "$GITHUB_EVENT_ACTION" == "benchmark-command" ]; then
    cd ..
    python .github/workflows/scripts/post-benchmark-to-github-pr.py /github-event.json result.json
 else
-    TAG=$(git describe --exact-match --tags 2>/dev/null || true)
    if [ ! -z "$TAG" ]; then
        MORE_TAGS="--tags type=release,release=$TAG"
    else

c_api/include/taichi/taichi_opengl.h

Lines changed: 2 additions & 1 deletion
@@ -32,7 +32,8 @@ typedef struct TiOpenglImageInteropInfo {

 // Function `ti_import_opengl_runtime`
 TI_DLL_EXPORT TiRuntime TI_API_CALL
-ti_import_opengl_runtime(TiOpenglRuntimeInteropInfo *interop_info);
+ti_import_opengl_runtime(TiOpenglRuntimeInteropInfo *interop_info,
+                         bool use_gles);

 // Function `ti_export_opengl_runtime`
 TI_DLL_EXPORT void TI_API_CALL

c_api/src/taichi_opengl_impl.cpp

Lines changed: 12 additions & 1 deletion
@@ -18,11 +18,22 @@ taichi::lang::gfx::GfxRuntime &OpenglRuntime::get_gfx_runtime() {
   return gfx_runtime_;
 }

+TiRuntime ti_import_opengl_runtime(TiOpenglRuntimeInteropInfo *interop_info,
+                                   bool use_gles) {
+  TI_CAPI_TRY_CATCH_BEGIN();
+  TI_CAPI_ARGUMENT_NULL_RV(interop_info);
+  taichi::lang::opengl::imported_process_address = interop_info->get_proc_addr;
+  taichi::lang::opengl::set_gles_override(use_gles);
+  TI_CAPI_TRY_CATCH_END();
+  return ti_create_runtime(TI_ARCH_OPENGL, 0);
+}
+
 void ti_export_opengl_runtime(TiRuntime runtime,
                               TiOpenglRuntimeInteropInfo *interop_info) {
   TI_CAPI_TRY_CATCH_BEGIN();
   // FIXME: (penguinliogn)
-  interop_info->get_proc_addr = taichi::lang::opengl::kGetOpenglProcAddr;
+  interop_info->get_proc_addr =
+      taichi::lang::opengl::kGetOpenglProcAddr.value();
   TI_CAPI_TRY_CATCH_END();
 }

docs/cover-in-ci.lst

Lines changed: 1 addition & 0 deletions
@@ -1,5 +1,6 @@
 docs/design/llvm_sparse_runtime.md
 docs/lang/articles/about/overview.md
+docs/lang/articles/advanced/argument_pack.md
 docs/lang/articles/advanced/data_oriented_class.md
 docs/lang/articles/advanced/dataclass.md
 docs/lang/articles/advanced/meta.md
docs/lang/articles/advanced/argument_pack.md

Lines changed: 61 additions & 0 deletions

@@ -0,0 +1,61 @@
+---
+sidebar_position: 6
+---
+
+# Taichi Argument Pack
+
+Taichi provides custom [argpack types](../type_system/type.md#argument-pack-type) for developers to cache unchanged parameters between multiple kernel calls.
+
+Argument packs, also known as argpacks, are user-defined data types that wrap a group of parameters so they can be stored and passed as a single argument. Their key advantage is buffering: if certain parameters remain unchanged across kernel calls, you can store them in an argpack, and Taichi can cache them to improve performance.
+
+## Creation and Initialization
+
+You can use `ti.types.argpack()` to create an argpack type, which can be used, for example, to represent view parameters. The following is an example of defining a Taichi argument pack:
+
+```python
+view_params_tmpl = ti.types.argpack(view_mtx=ti.math.mat4, proj_mtx=ti.math.mat4, far=ti.f32)
+```
+
+You can then use `view_params_tmpl` to initialize an argument pack with given values:
+
+```python cont
+view_params = view_params_tmpl(
+    view_mtx=ti.math.mat4(
+        [[1, 0, 0, 0],
+         [0, 1, 0, 0],
+         [0, 0, 1, 0],
+         [0, 0, 0, 1]]),
+    proj_mtx=ti.math.mat4(
+        [[1, 0, 0, 0],
+         [0, 1, 0, 0],
+         [0, 0, 1, 0],
+         [0, 0, 0, 1]]),
+    far=1)
+```
+
+## Pass Argument Packs to Kernels
+
+Once created and initialized, argument packs can be used as kernel parameters. Simply pass them to the kernel, and Taichi caches them across kernel calls (as long as their values remain unchanged) to optimize performance.
+
+```python cont
+@ti.kernel
+def p(view_params: view_params_tmpl) -> ti.f32:
+    return view_params.far
+
+
+print(p(view_params))  # 1.0
+```
+
+## Limitations
+
+Argpacks are designed primarily as parameter containers, which imposes certain limitations on their usage:
+
+- Argpacks can only be used as kernel parameters.
+- Argpacks cannot be used as return types.
+- Argpacks cannot be nested in compound types, but can be nested in other argpacks.
+
+:::note
+
+While the argument pack interface is supported in Taichi, the internal caching mechanism is still being developed and is planned for a future version of Taichi.
+
+:::
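The caching idea this new document describes can be illustrated with a small plain-Python sketch. This is a conceptual analogy only, not Taichi's implementation: the `ArgPack` and `Launcher` classes and the upload counter are invented here to show why bundling unchanged parameters lets a runtime skip re-staging them.

```python
# Conceptual analogy for argpack caching -- NOT Taichi's implementation.
# A pack bundles parameters; the launcher skips re-uploading a pack it has
# already staged, the way Taichi can skip re-staging an unchanged argpack.

class ArgPack:
    def __init__(self, **params):
        self._params = dict(params)

    def __getattr__(self, key):
        # Expose pack members as attributes, like `view_params.far`.
        return self._params[key]

class Launcher:
    def __init__(self):
        self.uploads = 0      # counts simulated device uploads
        self._staged = None   # the pack currently "on device"

    def launch(self, pack):
        if pack is not self._staged:
            self.uploads += 1     # only re-upload when the pack changes
            self._staged = pack
        return pack.far           # the "kernel" just reads a cached param

view_params = ArgPack(view_mtx="...", proj_mtx="...", far=1.0)
launcher = Launcher()
for _ in range(3):
    result = launcher.launch(view_params)  # same pack on every call

print(result, launcher.uploads)  # 1.0 1  -> three launches, one upload
```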

docs/lang/articles/advanced/dataclass.md

Lines changed: 7 additions & 0 deletions
@@ -12,6 +12,12 @@ Taichi provides custom [struct types](../type_system/type.md#compound-types) for

 To achieve the ends, Taichi enabled the `@ti.dataclass` decorator on a Python class. This is inspired by Python's [dataclass](https://docs.python.org/3/library/dataclasses.html) feature, which uses class fields with annotations to create data types.

+:::note
+
+The `dataclass` in Taichi is simply a wrapper for `ti.types.struct`. Therefore, the member types that a `dataclass` object can contain are the same as those allowed in a struct: scalars, matrix/vector types, and other dataclass/struct types. Objects such as fields, vector fields, and ndarrays cannot be used as members of a `dataclass` object.
+
+:::
+
 ## Create a struct from a Python class

 The following is an example of defining a Taichi struct type under a Python class:
@@ -72,6 +78,7 @@ get_area() # 201.062...
 ## Notes

 - Inheritance of Taichi dataclasses is not supported.
+- Default values in Taichi dataclasses are not supported.
 - While it is convenient and recommended to associate functions with a struct defined via `@ti.dataclass`, `ti.types.struct` can serve the same purpose with the help of the `__struct_methods` argument. As mentioned above, the two methods of defining a struct type produce identical output.

 ```python
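The note added above says `@ti.dataclass` is a thin wrapper over `ti.types.struct`. The mechanics of such a wrapper can be sketched in plain Python; the `make_struct` factory and the simplified decorator below are invented stand-ins, not Taichi's actual code:

```python
# Conceptual sketch of a dataclass-style decorator that maps annotated
# class fields to a struct type -- an analogy for `@ti.dataclass` wrapping
# `ti.types.struct`, not Taichi's implementation.

def make_struct(**members):
    # Stand-in for ti.types.struct: just records member names and types.
    return {"members": members}

def dataclass(cls):
    # Collect the annotated fields, exactly as Python dataclasses do,
    # and hand them to the struct factory.
    return make_struct(**cls.__annotations__)

@dataclass
class Sphere:
    center: str   # in Taichi this would be a vector type, e.g. ti.math.vec3
    radius: float

print(sorted(Sphere["members"]))  # ['center', 'radius']
```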

docs/lang/articles/basic/ndarray.md

Lines changed: 9 additions & 0 deletions
@@ -223,3 +223,12 @@ test(e) # New kernel compilation
 ```

 The compilation rule also applies to external arrays from NumPy or PyTorch. Changing the shape values does not trigger compilation, but changing the data type or the number of array dimensions does.
+
+
+## FAQ
+
+### How to use automatic differentiation with ndarrays?
+
+We recommend referring to [this project](https://github.com/taichi-dev/taichi-nerfs/blob/main/notebooks/autodiff.ipynb).
+
+Currently, support for automatic differentiation with Taichi ndarrays is still incomplete. We are working on improving this functionality and will provide a more detailed tutorial as soon as possible.

docs/lang/articles/kernels/kernel_function.md

Lines changed: 20 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -348,3 +348,23 @@ Type hinting is a formal solution to statically indicate the type of value withi
348348
#### Can I call a kernel from within a Taichi function?
349349

350350
No. Keep in mind that a kernel is the smallest unit for Taichi's runtime execution. You cannot call a kernel from within a Taichi function (in the Taichi scope). You can *only* call a kernel from the Python scope.
351+
352+
#### Can I specify different backends for each kernel separately?
353+
354+
Currently, Taichi does not support using multiple different backends simultaneously. Specifically, at any given time, Taichi only uses one backend. While you can call `ti.init()` multiple times in a program to switch between the backends, after each `ti.init()` call, all kernels will be recompiled to the new backend. For example:
355+
356+
```python
357+
ti.init(arch=ti.cpu)
358+
359+
@ti.kernel
360+
def test():
361+
print(ti.sin(1.0))
362+
363+
test()
364+
365+
ti.init(arch=ti.gpu)
366+
367+
test()
368+
```
369+
370+
In the provided code, we begin by designating the CPU as the backend, upon which the `test` function operates. Notably, the `test` function is initially executed on the CPU backend. As we proceed by invoking `ti.init(arch=ti.gpu)` to designate the GPU as the backend, all ensuing invocations of `test` trigger a recompilation of the `test` kernel tailored for the GPU backend, subsequently executing on the GPU. To conclude, Taichi does not facilitate the concurrent operation of multiple kernels on varied backends.
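The recompile-on-backend-switch behavior described in this FAQ entry can be modeled with a small plain-Python sketch. The `Runtime` class, its `compilations` counter, and the string "compiled stubs" are all invented for illustration; this is an analogy for the behavior, not Taichi's internals:

```python
# Conceptual sketch of per-backend kernel recompilation -- an analogy for
# the ti.init() switching behavior described above, not Taichi's internals.

class Runtime:
    def __init__(self):
        self.backend = None
        self.compiled = {}        # (func, backend) -> compiled stub
        self.compilations = 0

    def init(self, arch):
        self.backend = arch       # like ti.init(arch=...): one active backend

    def launch(self, fn):
        key = (fn, self.backend)
        if key not in self.compiled:
            self.compilations += 1          # first call on this backend compiles
            self.compiled[key] = f"{fn.__name__}@{self.backend}"
        return self.compiled[key]

def test_kernel():
    pass

rt = Runtime()
rt.init("cpu")
rt.launch(test_kernel)            # compiled for cpu
rt.launch(test_kernel)            # cached, no recompilation
rt.init("gpu")
rt.launch(test_kernel)            # recompiled for gpu
print(rt.compilations)  # 2
```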
