Implementing algorithms via compute shaders vs. pipeline shaders
Posted by TravisG on Game Development · Published 2013-10-27
Performance | GPGPU
With the availability of compute shaders in both DirectX and OpenGL, it's now possible to implement many algorithms without going through the rasterization pipeline, instead using general-purpose computing on the GPU to solve the problem.
For some algorithms this seems to be the intuitive, canonical solution, because they're not inherently rasterization-based and the rasterization-based shaders always felt like a workaround to harness GPU power (a simple example: generating a noise texture, where no quad needs to be rasterized at all).
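To make that noise example concrete, here is a minimal sketch of what I mean by the compute route, written as a GLSL 4.3 compute shader (the hash function and the binding point are just placeholders I picked for illustration): it writes straight into the texture with imageStore, so nothing is rasterized at all.

    #version 430
    // Minimal sketch: fill a texture with hash-based noise directly from a
    // compute shader -- no fullscreen quad, no rasterization involved.
    layout(local_size_x = 8, local_size_y = 8) in;
    layout(binding = 0, rgba8) uniform writeonly image2D uNoiseTex;

    // Simple integer hash, standing in for any GPU-friendly noise function.
    float hash(uvec2 p)
    {
        uint h = p.x * 374761393u + p.y * 668265263u;
        h = (h ^ (h >> 13u)) * 1274126177u;
        return float(h ^ (h >> 16u)) / 4294967295.0;
    }

    void main()
    {
        ivec2 texel = ivec2(gl_GlobalInvocationID.xy);
        float n = hash(gl_GlobalInvocationID.xy);
        imageStore(uNoiseTex, texel, vec4(n, n, n, 1.0));
    }

The fragment-shader equivalent would need a fullscreen quad, a vertex shader and a framebuffer attachment just to get the same pixels written.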
Given an algorithm that can be implemented both ways, are there general (potential) performance benefits to using compute shaders over going the normal route? Are there drawbacks to watch out for (for example, is there some kind of unusual overhead when switching to and from compute shaders at runtime)?
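To clarify what I mean by "switching at runtime": on the OpenGL side, a frame that mixes the two would look roughly like the sketch below (noiseProgram, noiseTex and drawScene() are placeholder names; the GL calls themselves are the standard 4.3 entry points). The compute dispatch and the draw that consumes its output have to be separated by an explicit memory barrier, and I'm wondering whether that kind of pipeline switch carries hidden costs.

    #include <GL/glcorearb.h> /* or whatever loader header you use (glad, GLEW, ...) */

    extern void drawScene(void); /* placeholder for the normal rendering path */

    /* Sketch of one frame that mixes a compute pass with the normal draw path.
       noiseProgram and noiseTex are assumed to have been created already. */
    void renderFrame(GLuint noiseProgram, GLuint noiseTex,
                     GLuint texWidth, GLuint texHeight)
    {
        /* Compute pass: fill the noise texture, no rasterization involved. */
        glUseProgram(noiseProgram);
        glBindImageTexture(0, noiseTex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA8);
        glDispatchCompute(texWidth / 8, texHeight / 8, 1); /* matches the 8x8 local size */

        /* Make the image writes visible to texture fetches in the draw pass. */
        glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);

        /* Back to the normal rasterization path, sampling noiseTex. */
        drawScene();
    }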
Are there perhaps other benefits or drawbacks to consider when choosing between the two?