Cheap ways to do scaling ops in shader?

Posted by Nick Wiggill on Game Development, 2012-11-30

I've got an extensive world terrain that uses vec3 for the vertex position attribute. That's good, because the terrain has endless gradations due to the use of floating point. But I'm thinking about how to reduce the amount of data uploaded to the GPU. For my terrain, which uses discrete / grid-based vertex positions in x and z, it's pretty clear that I can replace my vec3s (floats, really) with shorts, halving the per-vertex position attribute cost from 12 bytes each to 6 bytes. Considering I've got little enough other vertex data, and an enormous amount of terrain data to push into the world, it's a major gain.
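
As a minimal sketch of the shader side of that plan (assuming the positions are fed with glVertexAttribPointer as GL_SHORT with normalized set to GL_FALSE, so each component arrives already converted to a float of the same value), nothing in the shader interface would need to change:

    #version 330 core

    // Sketch only: the VBO would hold 3 x GLshort per vertex (6 bytes). Fed as
    // non-normalized GL_SHORT, each component reaches the shader as a float,
    // so the attribute is still declared as a plain vec3.
    in vec3 position;   // integer-valued floats, one unit per grid step
    uniform mat4 mvp;   // hypothetical combined model-view-projection matrix

    void main() {
        gl_Position = mvp * vec4(position, 1.0);
    }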

Currently in my code, one unit in GLSL shaders is equal to 1m in the world. I like that scale. If I move over to using shorts, though, I won't be able to use the same scale, as I would then have a very blocky world where every step in height is an entire metre.

So I see these potential solutions to scale the positional data correctly once it arrives at the vertex shader stage:

  • Use 10:1 scaling, i.e. 1 short unit = 1 decimetre in CPU-side code. Do a division by 10 in the vertex shader to scale incoming decimetre values back to metres (see the sketch after this list). Arbitrary (non-PoT) divisions tend to be slow, however.

  • Use (some power of two):1 scaling (e.g. 8:1), which would allow a bit shift (e.g. val >> 3) to do the division; I'm not sure how performant that is in shaders, though. The values are less intuitive to read, but it's possibly quite a bit faster than dividing by a non-PoT value.

  • Use a texture as a lookup table. I've heard that this is really fast. (Sketched further below as well.)
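
Here is a rough sketch of what options 1 and 2 could look like in the vertex shader, under the assumptions above. Since the attribute has already been converted to a float by the time it is read, the division in both cases reduces to a single multiply by a compile-time reciprocal; a power-of-two constant such as 0.125 has the small added benefit of being exactly representable in binary floating point, unlike 0.1. (A literal val >> 3 would only apply if the attribute were fetched as integers, e.g. an ivec3 fed via glVertexAttribIPointer.)

    #version 330 core

    in vec3 position;   // short-valued grid coordinates, arriving as floats
    uniform mat4 mvp;   // hypothetical model-view-projection matrix

    // Option 1: 1 short unit = 1 decimetre. The divide by 10 becomes one
    // multiply by the reciprocal, folded to a constant at compile time.
    const float DM_TO_M = 1.0 / 10.0;

    // Option 2: 1 short unit = 1/8 metre. Same single multiply, but 0.125 is
    // exactly representable, unlike 0.1.
    const float EIGHTH_TO_M = 1.0 / 8.0;

    void main() {
        vec3 metres = position * DM_TO_M;   // or: position * EIGHTH_TO_M
        gl_Position = mvp * vec4(metres, 1.0);
    }

Either way, the scaling itself is a single multiply per component on top of the usual transform work.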

I'm also open to whatever other solutions people can offer that achieve the same result: minimal vertex data with sensible scaling.
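
For completeness, a hypothetical sketch of option 3, assuming a one-component float sampler1D (here called scaleLut) whose texel i has been filled CPU-side with i * 0.1, and assuming the incoming grid values are non-negative (they would need to be biased first otherwise):

    #version 330 core

    in vec3 position;              // decimetre-valued grid coordinates, as floats
    uniform sampler1D scaleLut;    // hypothetical R32F lookup table: texel i = i * 0.1
    uniform mat4 mvp;

    void main() {
        // texelFetch reads the texture with integer coordinates and no filtering,
        // so each component is simply mapped through the precomputed table.
        vec3 metres = vec3(texelFetch(scaleLut, int(position.x), 0).r,
                           texelFetch(scaleLut, int(position.y), 0).r,
                           texelFetch(scaleLut, int(position.z), 0).r);
        gl_Position = mvp * vec4(metres, 1.0);
    }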
