-
We have an example for that in our help patches; you can see the setup visually in vvvv gamma. Start it, open the HelpBrowser (F1), go to the "Learn" tab, and look for "HowTo Write into a VertexBuffer in a Compute Shader". The gist of it is to create a UAV buffer with the RawBuffer flag, write the position and/or normal data into that buffer in a compute shader, and then bind the buffer as the vertex buffer of a model. Of course, you can also write into textures in a compute shader. The compute shader only has to run when the data changes; the execution/draw model is up to you. You will probably also need some of our buffer extensions to create the view: https://github.com/vvvv/VL.Stride/tree/preview/2021.4/packages/VL.Stride.Runtime/src/Graphics The downside is that this is only tested with DX11 on Windows; I don't know whether it works on other graphics APIs or platforms, as we have never tested that.
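To make that setup concrete, here is a rough C# sketch of the buffer/dispatch side in plain Stride (outside vvvv). It assumes a hypothetical SDSL compute shader named `LineVertexCompute` that declares a `RWByteAddressBuffer` and writes one position + normal pair per vertex; the shader name, the generated `LineVertexComputeKeys` parameter key, the vertex layout, and the buffer factory overload are placeholders and may need adjusting for your Stride version.

```csharp
using Stride.Core.Mathematics;
using Stride.Engine;
using Stride.Graphics;
using Stride.Rendering;
using Stride.Rendering.ComputeEffect;
using Buffer = Stride.Graphics.Buffer;

public class GpuLineVertexBuilder : SyncScript
{
    private const int VertexCount = 100_000;
    private const int VertexStride = 24; // float3 position + float3 normal

    private Buffer vertexBuffer;
    private ComputeEffectShader computeShader;
    private bool dirty = true; // set to true whenever the raw line points change

    public override void Start()
    {
        // UAV raw buffer that can also be bound as a vertex buffer.
        vertexBuffer = Buffer.New(
            GraphicsDevice,
            VertexCount * VertexStride,
            BufferFlags.VertexBuffer | BufferFlags.UnorderedAccess | BufferFlags.RawBuffer);

        // Wraps an SDSL compute shader ("LineVertexCompute" is a placeholder name).
        var renderContext = RenderContext.GetShared(Services);
        computeShader = new ComputeEffectShader(renderContext)
        {
            ShaderSourceName = "LineVertexCompute",
            ThreadNumbers = new Int3(64, 1, 1),
            ThreadGroupCounts = new Int3((VertexCount + 63) / 64, 1, 1),
        };

        // Bind the mesh once; its vertex data will come from the compute shader.
        var layout = new VertexDeclaration(
            VertexElement.Position<Vector3>(),
            VertexElement.Normal<Vector3>());
        var mesh = new Mesh
        {
            Draw = new MeshDraw
            {
                PrimitiveType = PrimitiveType.LineList,
                VertexBuffers = new[] { new VertexBufferBinding(vertexBuffer, layout, VertexCount) },
                DrawCount = VertexCount,
            }
        };
        var model = new Model();
        model.Meshes.Add(mesh);
        Entity.GetOrCreate<ModelComponent>().Model = model;
    }

    public override void Update()
    {
        if (!dirty)
            return;
        dirty = false;

        // Dispatch only when the source data changed, not every frame.
        // "LineVertexComputeKeys.OutputVertices" would be generated from the SDSL shader.
        var drawContext = RenderContext.GetShared(Services).GetThreadContext();
        computeShader.Parameters.Set(LineVertexComputeKeys.OutputVertices, vertexBuffer);
        computeShader.Draw(drawContext);
    }
}
```

The key point is the flag combination: VertexBuffer so the buffer can be bound in the MeshDraw, UnorderedAccess plus RawBuffer so the compute shader can write into it, and the dispatch happens only when the data is marked dirty.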
-
How do you use a Compute Shader with Stride3D? What we need to achieve is sending just the raw points to the GPU Compute Shader and having it create the VertexBuffer that our Vertex Shaders then use directly, avoiding sending all that vertex data back to the CPU only to be re-blitted back to the GPU. Our app is very line-centric, using real-world map data, so solving this issue ONCE would let us reuse the solution for all 20 of our Line-Layers.
Additionally, and more importantly, we need this for calculating Real-Earth Terrain Elevation Normals -- right now we do this entirely on the CPU. For Elevation Normals, we would want the Compute Shader to render its results directly to a Texture that is then used by the Pixel Shader... but we DO NOT WANT to regenerate these Normals EVERY FRAME (they NEVER change).
We fill the screen constantly with Terrain Normals at about half-pixel resolution (1 normal for every other pixel on screen). So for a 1000x2000 screen, we produce normals for tiles that span a larger-than-screen area of about 1500x3000, which means generating about 1.2M Normals at once. As you zoom in or out, we quickly load a new 1.2M Elevations+Normals from a higher/lower-resolution database.
We store these Normals in 256x256 Textures, so we have up to 80 built at a time (we build around the fringe too, so that if you pan the map 25% you won't have to wait to see the results). And we keep 200 tiles in our cache for re-use so that we don't thrash the GC.
Currently we are building these Normals in C# on the CPU, on a background thread; this accounts for about 80% of our lag and doubles our RAM requirement.
If we can get Stride3D's Compute Shader to do all of this processing on the GPU, ONCE each time a tile's elevation data is updated, it would speed up our presentation of new Elevations+Normals by about 5X and cut our RAM demand in half (allowing us to cache more data)! THIS WOULD BE AWESOME. It would enable us to show higher-resolution terrain, fast and efficiently.
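For reference, here is a minimal sketch of what that bake-once flow might look like with Stride's ComputeEffectShader. It assumes a hypothetical SDSL compute shader named `TerrainNormalCompute` (and its generated `TerrainNormalComputeKeys`) that reads an elevation texture and writes normals into a RWTexture2D; the names, pixel format, and tile size are placeholders.

```csharp
using Stride.Core.Mathematics;
using Stride.Graphics;
using Stride.Rendering;
using Stride.Rendering.ComputeEffect;

// Sketch: one 256x256 normal texture per tile, written by a compute shader
// only when that tile's elevation data changes (never per frame).
public class TileNormalBaker
{
    private readonly ComputeEffectShader normalShader;

    public TileNormalBaker(RenderContext renderContext)
    {
        // "TerrainNormalCompute" is a placeholder SDSL shader that reads an
        // elevation texture and writes normals into a RWTexture2D.
        normalShader = new ComputeEffectShader(renderContext)
        {
            ShaderSourceName = "TerrainNormalCompute",
            ThreadNumbers = new Int3(8, 8, 1),
            ThreadGroupCounts = new Int3(256 / 8, 256 / 8, 1),
        };
    }

    public Texture CreateTileTexture(GraphicsDevice device)
    {
        // ShaderResource so the pixel shader can sample it every frame,
        // UnorderedAccess so the compute shader can write it once.
        return Texture.New2D(device, 256, 256, PixelFormat.R16G16B16A16_Float,
            TextureFlags.ShaderResource | TextureFlags.UnorderedAccess);
    }

    // Call only when a tile's elevation data has been updated; the caller is
    // expected to supply a RenderDrawContext from the render pipeline.
    public void BakeNormals(RenderDrawContext drawContext, Texture elevationTile, Texture normalTile)
    {
        // Parameter keys would be generated from the (hypothetical) SDSL shader.
        normalShader.Parameters.Set(TerrainNormalComputeKeys.Elevation, elevationTile);
        normalShader.Parameters.Set(TerrainNormalComputeKeys.Normals, normalTile);
        normalShader.Draw(drawContext);
    }
}
```

Because the tile texture carries both ShaderResource and UnorderedAccess flags, the compute shader writes it once per elevation update and the pixel shader samples it every frame, with no CPU round trip.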
Urho.NET offers no support for Geometry Shaders, so this would be a competitive advantage of Stride3D over Urho3D.
Here is a screenshot of our Map showing Real-Earth Terrain, highlighting the higher elevations with Red/Yellow. The shading is accomplished via Normals. This is currently being done by Urho.NET: