Add some docs
Not sure if these docs entirely make sense, but hopefully they introduce some concepts to newer
users from rust who want to use GPU.
JulianKnodt committed Apr 16, 2024
1 parent 54f6978 commit 016282e
Showing 4 changed files with 136 additions and 1 deletion.
5 changes: 4 additions & 1 deletion docs/src/introduction.md
@@ -3,7 +3,10 @@
Welcome to the Rust-GPU dev guide! This documentation is meant for documenting
how to use and develop on Rust-GPU.

If you're a developer looking to learn about shaders through rust-gpu, check out [_"What are
Shaders"_](./shaders-intro.md).

If you're looking to get started with writing your own shaders in Rust,
check out the [_"Writing Shader Crates"_](./writing-shader-crates.md) section for
more information on how to get started.

95 changes: 95 additions & 0 deletions docs/src/shaders-intro.md
@@ -0,0 +1,95 @@
# What is a Shader?

"Shaders" are programs which historically rendered the "shades" (colors & light) of a 3D scene,
but the term now generally refers to any program that runs on the GPU. Shaders can be
general-purpose compute shaders, used for example in machine learning, or programs for a
specific stage of the graphics pipeline, such as rasterizing a triangle or running a function
per mesh vertex.

# Rendering Shaders

Shaders were originally designed for rendering [triangle
meshes](https://en.wikipedia.org/wiki/Triangle_mesh) from the perspective of a camera.
There have traditionally been two distinct ways to render a mesh: rasterization and
raytracing, and we'll explain both.

Raytracing is a [physical
model](https://pbrt.org/) of light transport, and can be understood concisely as an attempt to
model how light behaves in the real world. Some modern GPUs [support raytracing](https://developer.nvidia.com/rtx/raytracing/dxr/dx12-raytracing-tutorial-part-2), but compared to rasterization
pipelines, raytracing pipelines are less mature and have much less history (as of 2024).
Raytracing is not covered by this doc.

Rasterization approximates the appearance of triangles more efficiently than raytracing.
Triangle vertices are represented as 3D positions in world space (with the world origin as
reference point), which are then translated into positions relative to the camera's view
(camera as reference point, with rotated axes). For orthographic and perspective cameras, the
standard cameras used in rendering, the transformation from camera space to screen space is
defined by a
[matrix](https://learnwebgl.brown37.net/08_projections/projections_perspective.html).
This projection step is generally done in the *vertex shader*, which applies these
transformations to a buffer of vertices passed to the GPU. Each vertex also has a depth in
screen space, so when triangles are rendered later, only the triangles closest to the camera
are drawn. The most common next stage is the *fragment shader*, which determines how each
visible part of a triangle is rendered. An oversimplified but reasonable description is that a
fragment shader takes the triangle nearest to the camera at a given pixel and determines a
color for it. These shaders write directly to an output buffer, such as a window or an output
image.
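The projection step above can be sketched in plain Rust (this is illustrative math, not
rust-gpu code; the `project` helper and its simplified depth handling are assumptions for the
example, real projections also map depth through near/far planes):

```rust
/// Minimal perspective projection sketch: maps a camera-space point
/// (camera looks down -z) toward normalized device coordinates.
/// `fov_y` is the vertical field of view in radians.
fn project(p: [f32; 3], fov_y: f32, aspect: f32) -> [f32; 3] {
    // Focal scale: a narrower field of view magnifies the scene.
    let f = 1.0 / (fov_y / 2.0).tan();
    let (x, y, z) = (p[0], p[1], p[2]);
    // Perspective divide: points farther from the camera shrink toward
    // the screen center. Depth (-z) is kept for later visibility tests.
    [f * x / (aspect * -z), f * y / -z, -z]
}

fn main() {
    // A point 2 units in front of the camera, 90-degree field of view.
    let ndc = project([1.0, 1.0, -2.0], std::f32::consts::FRAC_PI_2, 1.0);
    println!("{:?}", ndc); // x and y land at 0.5, depth is 2.0
}
```

A vertex shader would apply exactly this kind of transform to every vertex in the buffer,
independently and in parallel.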

# Compute Shaders

When someone talks about running ML code on the GPU, they are usually referring to compute
shaders, such as CUDA kernels, which are general-purpose programs not specifically related to
rendering.

Unlike rendering-related shaders, compute shaders can run on arbitrary data, though they are
generally run over 1D, 2D, or 3D domains. To handle this well, compute shaders are invoked
with some number of "local work groups". For example, it could be (5, 5, 5) for handling a
5x5x5 matrix, or (128, 128, 1) for handling a 128x128 image.
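The number of work groups to dispatch follows from the data size and the local group size by a
per-dimension ceiling division (plain Rust sketch; `dispatch_counts` is a hypothetical helper,
not part of rust-gpu):

```rust
/// How many work groups are needed so that `size` elements are covered
/// by groups of `local` threads each, in every dimension.
fn dispatch_counts(size: [u32; 3], local: [u32; 3]) -> [u32; 3] {
    // Ceiling division: round up so partially-filled groups still run.
    let ceil_div = |a: u32, b: u32| (a + b - 1) / b;
    [
        ceil_div(size[0], local[0]),
        ceil_div(size[1], local[1]),
        ceil_div(size[2], local[2]),
    ]
}

fn main() {
    // A 128x128 image processed by (16, 16, 1)-sized local groups
    // needs an 8x8x1 grid of work groups.
    println!("{:?}", dispatch_counts([128, 128, 1], [16, 16, 1]));
}
```

Inside the shader, each invocation then checks its global id against the real data size, since
the rounded-up grid may overshoot the edges.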

I/O to compute shaders must be done through explicit buffers or textures, as opposed to
rendering shaders, which receive input from the prior pipeline stage.

As an example:
```rust
use spirv_std::glam::UVec3;
use spirv_std::spirv;

#[spirv(compute(threads(4, 4, 1)))]
pub fn handle_compute(
    #[spirv(global_invocation_id)] global_id: UVec3,
    #[spirv(local_invocation_id)] local_id: UVec3,
) {
    // step 1: use global_id and local_id to figure out which part of the work needs to be done
    // step 2: do the work
    // step 3: ...
    // step 4: profit
}
```

## Usage Details

GLSL typically differentiates between shader kinds by placing them in fully separate files
that are compiled separately. In Rust-GPU, shader kinds are instead marked with attributes:

```rust
#[spirv(fragment)]
fn fragment_shader(...) {

}

#[spirv(vertex)]
fn vertex_shader(...) {

}
```

The supported attributes include `vertex`, `fragment`, `geometry`, `tessellation_control`,
`tessellation_evaluation`, and `compute`. The full list is in `crates/rustc_codegen_spirv/src/symbols.rs`.

# Why Shaders?

Shaders are generally preferred over CPU code for programs whose work items require little
communication between them. For rendering, each face can be projected in parallel, and then
the shade of each face can be computed separately. For compute shaders, it depends heavily on
the task, but for matrix multiplies and other tensor operations there are efficient ways to
parallelize and implement them on the GPU.
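As a sketch of why matrix multiplies parallelize well: each output element depends only on one
row and one column of the inputs, so every GPU invocation can compute one element independently
(plain Rust simulating the per-invocation work; `mat_mul_element` is a hypothetical helper, not
rust-gpu API):

```rust
/// The work one GPU invocation would do: compute a single element of
/// C = A * B for row-major n x n matrices, identified by (row, col),
/// which on a GPU would come from the invocation's global id.
fn mat_mul_element(a: &[f32], b: &[f32], n: usize, row: usize, col: usize) -> f32 {
    // Dot product of row `row` of A with column `col` of B; reads only,
    // no communication with other invocations.
    (0..n).map(|k| a[row * n + k] * b[k * n + col]).sum()
}

fn main() {
    // 2x2 identity times an arbitrary 2x2 matrix: all four elements are
    // independent, so all four invocations could run at once.
    let a = [1.0, 0.0, 0.0, 1.0];
    let b = [3.0, 4.0, 5.0, 6.0];
    for row in 0..2 {
        for col in 0..2 {
            print!("{} ", mat_mul_element(&a, &b, 2, row, col));
        }
    }
    println!();
}
```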

References:
- [Wikipedia](https://en.wikipedia.org/wiki/Shader#Ray_tracing_shaders)
- [The Book of (Fragment) Shaders](https://thebookofshaders.com/)
- [Compute Shaders](https://www.khronos.org/opengl/wiki/Compute_Shader)
16 changes: 16 additions & 0 deletions examples/README.md
@@ -3,8 +3,24 @@
The examples here are split into a few categories:

- The shaders folder contains various rust-gpu shaders, which are examples of how to use rust-gpu.
  The primary example shader is the Sky Shader, which renders a sky onto a single quad.


- The runners folder contains programs that build and execute the shaders in the shaders folder using, for example,
  Vulkan. These programs are not exactly examples of how to use rust-gpu, as they're rather generic Vulkan sample apps,
  but they do contain some infrastructure examples of how to integrate rust-gpu shaders into a build system (although
  they aren't the cleanest of examples, as they're also testing some of the more convoluted ways of consuming rust-gpu).
- Finally, the multibuilder folder is a very short sample app showing how to use the `multimodule` feature of `spirv-builder`.

## Running an example

Each example can be run as follows:
```shell
# can substitute wgpu with cpu or ash
cd examples/runners/wgpu
cargo run --release
```

This should automatically build rust-gpu, then compile the shader, then run it.
21 changes: 21 additions & 0 deletions examples/runners/wgpu/README.md
@@ -0,0 +1,21 @@
# WGPU Runner

This is a general runner for fragment shaders, based on the [wgpu](https://wgpu.rs/) library,
which is responsible for creating a rendering target.

## `build.rs`

The `build.rs` file is responsible for running the builder, which compiles all the shaders,
targeting SPIR-V. It uses the `SpirvBuilder` from Rust-GPU to compile each of the example
shaders.

## Core functionality

The main file delegates immediately to `lib.rs` which will either call the compute shader or the
graphics shader depending on flags passed.

In the graphics file, the entry point is `start`, which first creates an event loop for handling
mouse clicks, and then creates a window using `winit`. It then performs setup for the window,
which allows the window to be immediately rendered to from the GPU.

