Search Results

Search found 173 results on 7 pages for 'framebuffer'.

Page 2 of 7

  • Unable to get stencil buffer to work on iOS 4 (5.0 works fine) [OpenGL ES 2.0]

    - by MurderDev
    So I am trying to use a stencil buffer in iOS for masking/clipping purposes. Do you guys have any idea why this code may not work? This is everything I have associated with Stencils. On iOS 4 I get a black screen. On iOS 5 I get exactly what I expect. The transparent areas of the image I drew in the stencil are the only areas being drawn later. Code is below. This is where I setup the frameBuffer, depth and stencil. In iOS the depth and stencil are combined. -(void)setupDepthBuffer { glGenRenderbuffers(1, &depthRenderBuffer); glBindRenderbuffer(GL_RENDERBUFFER, depthRenderBuffer); glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8_OES, self.frame.size.width * [[UIScreen mainScreen] scale], self.frame.size.height * [[UIScreen mainScreen] scale]); } -(void)setupFrameBuffer { glGenFramebuffers(1, &frameBuffer); glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer); glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderBuffer); glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderBuffer); glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depthRenderBuffer); // Check the FBO. if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) { NSLog(@"Failure with framebuffer generation: %d", glCheckFramebufferStatus(GL_FRAMEBUFFER)); } } This is how I am setting up and drawing the stencil. (Shader code below.) glEnable(GL_STENCIL_TEST); glDisable(GL_DEPTH_TEST); glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); glDepthMask(GL_FALSE); glStencilFunc(GL_ALWAYS, 1, -1); glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE); glColorMask(0, 0, 0, 0); glClear(GL_STENCIL_BUFFER_BIT); machineForeground.shader = [StencilEffect sharedInstance]; [machineForeground draw]; machineForeground.shader = [BasicEffect sharedInstance]; glDisable(GL_STENCIL_TEST); glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE); glDepthMask(GL_TRUE); Here is where I am using the stencil. glEnable(GL_STENCIL_TEST); glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP); glStencilFunc(GL_EQUAL, 1, -1); ...Draw Stuff here glDisable(GL_STENCIL_TEST); Finally here is my fragment shader. varying lowp vec2 TexCoordOut; uniform sampler2D Texture; void main(void) { lowp vec4 color = texture2D(Texture, TexCoordOut); if(color.a < 0.1) gl_FragColor = color; else discard; }
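
    For reference, a minimal C sketch of the packed depth/stencil attachment pattern used above, assuming the OES_packed_depth_stencil extension (as on iOS); the function name, handles and sizes are purely illustrative, and it also sets the stencil write mask explicitly before clearing:

        #include <OpenGLES/ES2/gl.h>
        #include <OpenGLES/ES2/glext.h>

        static void setupDepthStencil(GLuint framebuffer, GLsizei width, GLsizei height)
        {
            GLuint depthStencilRb = 0;
            glGenRenderbuffers(1, &depthStencilRb);
            glBindRenderbuffer(GL_RENDERBUFFER, depthStencilRb);
            /* One renderbuffer backs both depth and stencil (OES_packed_depth_stencil). */
            glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8_OES, width, height);

            glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
            glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthStencilRb);
            glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depthStencilRb);

            if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
                /* incomplete framebuffer: check that all attachments have matching sizes */
            }

            /* Make sure stencil writes are not masked off before clearing the stencil plane. */
            glStencilMask(0xFF);
            glClearStencil(0);
            glClear(GL_STENCIL_BUFFER_BIT);
        }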

    Read the article

  • Change Linux console screen blanking behavior

    - by quack quixote
    How do I change the screen blanking behavior on Linux virtual terminals? For example, if I switch to a VT from X, login, and leave the system alone for 5 minutes or so, the screen will blank like a screensaver. It comes back with any keypress, like a screensaver. Mostly I just want to change the timeout, but I'm also interested in other settings. If it helps, one of my systems is running Ubuntu 10.04 with the stock graphics drivers. fbset shows the console using the radeondrmfb framebuffer device.
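
    For reference, the console blank timeout can also be changed programmatically by writing the Linux console escape sequence ESC [ 9 ; n ] (n in minutes, 0 to disable) to the virtual terminal, which is what setterm -blank does; the consoleblank= kernel parameter sets the boot-time default in seconds. A small C sketch, assuming it is run on the VT itself:

        #include <stdio.h>

        int main(void)
        {
            /* Set the VT screen-blank timeout to 10 minutes (use 0 to disable blanking). */
            printf("\033[9;10]");
            fflush(stdout);
            return 0;
        }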

    Read the article

  • Change Linux Console's Default Monitor

    - by Tim M
    Is there any way to specify which monitor the console is displayed on in Linux? Details: I have a 3 monitor setup with 2 video cards. When I boot the computer, the BIOS displays on the PCI graphics card (which has a small monitor). When starting Linux, the console is displayed on the same monitor. Is there a way to have the console output on a different monitor? I'm using the vesafb framebuffer. I don't see a way in my BIOS to change the default video card.

    Read the article

  • How to develop a DirectFB app without leaving the X11 environment

    - by Edu Felipe
    Hi folks, I'm trying to develop a GUI application for an embedded platform, without any windowing whatsoever, and I'm doing that with DirectFB, which suits my needs very well. Since the embedded hardware I develop for is not that powerful, I would really like to develop on my own Ubuntu desktop. The problem is that the framebuffer conflicts with X.org, forcing me to leave the whole desktop and shut down X.org just to see the result of my changes. Is there a good framebuffer simulator that suits my needs? Qt has one, called QVFb, but it only works for developing Qt apps, and the VNC back-end of DirectFB always crashes. So, any ideas?
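
    For what it's worth, DirectFB ships an X11 system backend, so a sketch like the one below can run inside an X session instead of on the real framebuffer; this assumes your DirectFB build includes the x11 system module, and the --dfb:system=x11 option is passed on the command line (or set in ~/.directfbrc):

        #include <directfb.h>

        int main(int argc, char *argv[])
        {
            IDirectFB             *dfb     = NULL;
            IDirectFBSurface      *primary = NULL;
            DFBSurfaceDescription  desc;

            /* Run as: ./demo --dfb:system=x11 to render into an X11 window. */
            DirectFBInit(&argc, &argv);
            DirectFBCreate(&dfb);

            desc.flags = DSDESC_CAPS;
            desc.caps  = DSCAPS_PRIMARY;
            dfb->CreateSurface(dfb, &desc, &primary);

            primary->Clear(primary, 0x20, 0x20, 0x20, 0xff);   /* dark grey test fill */
            primary->Flip(primary, NULL, DSFLIP_NONE);

            primary->Release(primary);
            dfb->Release(dfb);
            return 0;
        }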

    Read the article

  • What is the purpose of the bit depths of the individual framebuffer components in GLFW3's glfwWindowHint function?

    - by Rui d'Orey
    I would like to know what the following "framebuffer related hints" of the GLFW3 function glfwWindowHint do: GLFW_RED_BITS, GLFW_GREEN_BITS, GLFW_BLUE_BITS, GLFW_ALPHA_BITS, GLFW_DEPTH_BITS, GLFW_STENCIL_BITS. What is their purpose? Are their default values usually enough? Where are those bits stored? In a buffer on the GPU? And what exactly do they affect, and in what way? Thank you in advance!
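
    As a point of reference, these hints request the desired bit depths of the channels of the default framebuffer that GLFW asks the driver to allocate for the window (the storage itself lives in driver/GPU memory, not in your application); a minimal C sketch with purely illustrative values, requesting the common 8-bit RGBA plus 24/8 depth-stencil:

        #include <GLFW/glfw3.h>

        int main(void)
        {
            if (!glfwInit())
                return -1;

            /* Desired bit depths for the default framebuffer (illustrative values). */
            glfwWindowHint(GLFW_RED_BITS,     8);
            glfwWindowHint(GLFW_GREEN_BITS,   8);
            glfwWindowHint(GLFW_BLUE_BITS,    8);
            glfwWindowHint(GLFW_ALPHA_BITS,   8);
            glfwWindowHint(GLFW_DEPTH_BITS,  24);
            glfwWindowHint(GLFW_STENCIL_BITS, 8);

            GLFWwindow *window = glfwCreateWindow(640, 480, "framebuffer hints demo", NULL, NULL);
            if (!window) { glfwTerminate(); return -1; }

            glfwMakeContextCurrent(window);
            while (!glfwWindowShouldClose(window)) {
                glfwSwapBuffers(window);
                glfwPollEvents();
            }
            glfwTerminate();
            return 0;
        }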

    Read the article

  • Omni-directional light shadow mapping with cubemaps in WebGL

    - by Winged
    First of all I must say that I have read a lot of posts describing the usage of cubemaps, but I'm still confused about how to use them. My goal is to achieve simple omni-directional (point) light shadow mapping in my WebGL application. I know that there are many more techniques (like Two-Hemispheres or Camera Space Shadow Mapping) which are way more efficient, but for educational purposes cubemaps are my primary goal. Till now, I have adapted a simple shadow mapping which works with spotlights (with one exception: I don't know how to cut off the glitchy part beyond the reach of a single shadow map texture). So for now, this is how I understand the usage of cubemaps in shadow mapping: Set up a framebuffer (in the case of cubemaps, 6 framebuffers; 6 instead of 1 because every call to framebufferTexture2D slows down execution, which is nicely described elsewhere) and a cubemap texture. Also, in WebGL depth components are not well supported, so I need to render the depth to RGBA first. this.texture = gl.createTexture(); gl.bindTexture(gl.TEXTURE_CUBE_MAP, this.texture); gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR); gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAG_FILTER, gl.LINEAR); for (var face = 0; face < 6; face++) gl.texImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, gl.RGBA, this.size, this.size, 0, gl.RGBA, gl.UNSIGNED_BYTE, null); gl.bindTexture(gl.TEXTURE_CUBE_MAP, null); this.framebuffer = []; for (face = 0; face < 6; face++) { this.framebuffer[face] = gl.createFramebuffer(); gl.bindFramebuffer(gl.FRAMEBUFFER, this.framebuffer[face]); gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_CUBE_MAP_POSITIVE_X + face, this.texture, 0); gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, this.depthbuffer); var e = gl.checkFramebufferStatus(gl.FRAMEBUFFER); // Check for errors if (e !== gl.FRAMEBUFFER_COMPLETE) throw "Cubemap framebuffer object is incomplete: " + e.toString(); } Set up the light and the camera (I'm not sure whether I should store all 6 view matrices and send them to the shaders later, or whether there is a way to do it with just one view matrix). Render the scene 6 times from the light's position, each time in another direction (X, -X, Y, -Y, Z, -Z): for (var face = 0; face < 6; face++) { gl.bindFramebuffer(gl.FRAMEBUFFER, shadow.buffer.framebuffer[face]); gl.viewport(0, 0, shadow.buffer.size, shadow.buffer.size); gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); camera.lookAt( light.position.add( cubeMapDirections[face] ) ); scene.draw(shadow.program); } In a second pass, calculate the projection of the current vertex using the light's projection and view matrices. Now I don't know whether I should calculate 6 of them, because of the 6 faces of the cubemap. ScaleMatrix pushes the projected vertex into the 0.0 - 1.0 region. vDepthPosition = ScaleMatrix * uPMatrixFromLight * uVMatrixFromLight * vWorldVertex; In the fragment shader, calculate the distance between the current vertex and the light position and check whether it's deeper than the depth information read from the earlier rendered shadow map. I know how to do it with a 2D texture, but I have no idea how I should use a cubemap texture here. I have read that texture lookups into cubemaps are performed with a normal vector instead of a UV coordinate. What vector should I use? Just a normalized vector pointing to the current vertex?
    For now, my code for this part looks like this (not working yet): float shadow = 1.0; vec3 depth = vDepthPosition.xyz / vDepthPosition.w; depth.z = length(vWorldVertex.xyz - uLightPosition) * linearDepthConstant; float shadowDepth = unpack(textureCube(uDepthMapSampler, vWorldVertex.xyz)); if (depth.z > shadowDepth) shadow = 0.5; Could you give me some hints or examples (preferably in WebGL code) of how I should build it?
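
    As a point of reference, the six per-face view directions (and the up vectors the conventional cube-map face orientations expect) can be tabulated once and reused every frame; a small C sketch of that table, with names chosen only for illustration. The cube-map lookup vector in the lighting pass is then simply the direction from the light to the shaded point (worldPos - lightPos), no UV coordinate involved:

        /* Conventional look directions and up vectors for the six cube-map faces,
         * in the order +X, -X, +Y, -Y, +Z, -Z. One view matrix is built per face
         * from the light position each frame. */
        typedef struct { float dir[3]; float up[3]; } CubeFace;

        static const CubeFace kCubeFaces[6] = {
            { { 1.0f,  0.0f,  0.0f}, {0.0f, -1.0f,  0.0f} },  /* +X */
            { {-1.0f,  0.0f,  0.0f}, {0.0f, -1.0f,  0.0f} },  /* -X */
            { { 0.0f,  1.0f,  0.0f}, {0.0f,  0.0f,  1.0f} },  /* +Y */
            { { 0.0f, -1.0f,  0.0f}, {0.0f,  0.0f, -1.0f} },  /* -Y */
            { { 0.0f,  0.0f,  1.0f}, {0.0f, -1.0f,  0.0f} },  /* +Z */
            { { 0.0f,  0.0f, -1.0f}, {0.0f, -1.0f,  0.0f} },  /* -Z */
        };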

    Read the article

  • When to call glEnable(GL_FRAMEBUFFER_SRGB)?

    - by Steven Lu
    I have a rendering system where I draw to an FBO with a multisampled renderbuffer, then blit it to another FBO with a texture to resolve the samples, so that I can read the texture while performing post-processing shading and drawing to the backbuffer (FBO index 0). Now I'd like to get some correct sRGB output... The problem is that the behavior of the program is rather inconsistent between OS X and Windows, and it also changes depending on the machine: on Windows with the Intel HD 3000 it will not apply the sRGB nonlinearity, but on my other machine with an Nvidia GTX 670 it does. On the Intel HD 3000 in OS X it also applies it. So this probably means that I'm not setting my GL_FRAMEBUFFER_SRGB enable state at the right points in the program. However, I can't seem to find any tutorials that actually tell me when I ought to enable it; they only ever mention that it's dead easy and comes at no performance cost. I am currently not loading in any textures, so I haven't had a need to deal with linearizing their colors yet. To force the program not to simply spit back out the linear color values, what I have tried is simply commenting out my glDisable(GL_FRAMEBUFFER_SRGB) line, which effectively means the setting is enabled for the entire pipeline, and I actually redundantly force it back on every frame. I don't know if this is correct or not. It certainly does apply a nonlinearization to the colors, but I can't tell if it is getting applied twice (which would be bad). It could apply the gamma as I render to my first FBO. It could do it when I blit the first FBO to the second FBO. Why not? I've gone so far as to take screenshots of my final frame and compare the raw pixel color values to the colors I set them to in the program: I set the input color to RGB(1,2,3) and the output is RGB(13,22,28). That seems like quite a lot of color compression at the low end and leads me to question whether the gamma is getting applied multiple times. I have just now gone through the sRGB equation and I can verify that the conversion seems to be applied only once, as linear 1/255, 2/255, and 3/255 do indeed map to sRGB 13/255, 22/255, and 28/255 using the equation 1.055*C^(1/2.4) - 0.055. Given that the expansion is so large for these low color values, it really should be obvious if the sRGB color transform were getting applied more than once. So, I still haven't determined what the right thing to do is. Does glEnable(GL_FRAMEBUFFER_SRGB) only apply to the final framebuffer values, in which case I can just set it during my GL init routine and forget about it hereafter?
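
    One common way to structure this, sketched below in C: keep GL_FRAMEBUFFER_SRGB disabled while rendering and post-processing in linear space, and enable it only around the final draw into the (sRGB-capable) default framebuffer, so the encoding is applied exactly once. This is a sketch of a common pattern, not necessarily the only correct placement; the FBO handles and sizes are illustrative:

        /* assumes <GL/glew.h> or another loader exposing GL 3.0 framebuffer calls */

        static void presentFrame(GLuint sceneFbo, GLuint resolveFbo, int w, int h)
        {
            /* Intermediate passes: stay in linear space. */
            glDisable(GL_FRAMEBUFFER_SRGB);
            glBindFramebuffer(GL_FRAMEBUFFER, sceneFbo);
            /* ... draw the scene into the multisampled renderbuffer ... */

            /* Resolve the samples into the texture-backed FBO. */
            glBindFramebuffer(GL_READ_FRAMEBUFFER, sceneFbo);
            glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
            glBlitFramebuffer(0, 0, w, h, 0, 0, w, h, GL_COLOR_BUFFER_BIT, GL_NEAREST);

            /* Final pass only: encode to sRGB exactly once while writing the backbuffer. */
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            glEnable(GL_FRAMEBUFFER_SRGB);
            /* ... draw the fullscreen post-processing quad ... */
            glDisable(GL_FRAMEBUFFER_SRGB);
        }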

    Read the article

  • Android source code not working, reading frame buffer through glReadPixels

    - by Muhammad Ali Rajput
    Hi, I am new to Android development and have an assignment to read frame buffer data after a specified interval of time. I have come up with the following code: public class mainActivity extends Activity { Bitmap mSavedBM; private EGL10 egl; private EGLDisplay display; private EGLConfig config; private EGLSurface surface; private EGLContext eglContext; private GL11 gl; protected int width, height; //Called when the activity is first created. @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); // get the screen width and height DisplayMetrics dm = new DisplayMetrics(); getWindowManager().getDefaultDisplay().getMetrics(dm); int screenWidth = dm.widthPixels; int screenHeight = dm.heightPixels; String SCREENSHOT_DIR = "/screenshots"; initGLFr(); //GlView initialized. savePixels( 0, 10, screenWidth, screenHeight, gl); //this gets the screen to the mSavedBM. saveBitmap(mSavedBM, SCREENSHOT_DIR, "capturedImage"); //Now we need to save the bitmap (the screen capture) to some location. setContentView(R.layout.main); //This displays the content on the screen } private void initGLFr() { egl = (EGL10) EGLContext.getEGL(); display = egl.eglGetDisplay(EGL10.EGL_DEFAULT_DISPLAY); int[] ver = new int[2]; egl.eglInitialize(display, ver); int[] configSpec = {EGL10.EGL_NONE}; EGLConfig[] configOut = new EGLConfig[1]; int[] nConfig = new int[1]; egl.eglChooseConfig(display, configSpec, configOut, 1, nConfig); config = configOut[0]; eglContext = egl.eglCreateContext(display, config, EGL10.EGL_NO_CONTEXT, null); surface = egl.eglCreateWindowSurface(display, config, SurfaceHolder.SURFACE_TYPE_GPU, null); egl.eglMakeCurrent(display, surface, surface, eglContext); gl = (GL11) eglContext.getGL(); } public void savePixels(int x, int y, int w, int h, GL10 gl) { if (gl == null) return; synchronized (this) { if (mSavedBM != null) { mSavedBM.recycle(); mSavedBM = null; } } int b[] = new int[w * (y + h)]; int bt[] = new int[w * h]; IntBuffer ib = IntBuffer.wrap(b); ib.position(0); gl.glReadPixels(x, 0, w, y + h, GL10.GL_RGBA,GL10.GL_UNSIGNED_BYTE,ib); for (int i = 0, k = 0; i < h; i++, k++) { //OpenGLbitmap is incompatible with Android bitmap //and so, some corrections need to be done. for (int j = 0; j < w; j++) { int pix = b[i * w + j]; int pb = (pix >> 16) & 0xff; int pr = (pix << 16) & 0x00ff0000; int pix1 = (pix & 0xff00ff00) | pr | pb; bt[(h - k - 1) * w + j] = pix1; } } Bitmap sb = Bitmap.createBitmap(bt, w, h, Bitmap.Config.ARGB_8888); synchronized (this) { mSavedBM = sb; } } static String saveBitmap(Bitmap bitmap, String dir, String baseName) { try { File sdcard = Environment.getExternalStorageDirectory(); File pictureDir = new File(sdcard, dir); pictureDir.mkdirs(); File f = null; for (int i = 1; i < 200; ++i) { String name = baseName + i + ".png"; f = new File(pictureDir, name); if (!f.exists()) { break; } } if (!f.exists()) { String name = f.getAbsolutePath(); FileOutputStream fos = new FileOutputStream(name); bitmap.compress(Bitmap.CompressFormat.PNG, 100, fos); fos.flush(); fos.close(); return name; } } catch (Exception e) { } finally { //if (fos != null) { // fos.close(); // } } return null; } } Also, if some one can direct me to better way to read the framebuffer it would be great. I am using Android 2.2 and virtual device of API level 8. I have gone through many previous discussions and have found that we can not know read frame buffer directly throuh the "/dev/graphics/fb0". Thanks, Muhammad Ali
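
    For comparison, the usual way to get an offscreen surface whose pixels can be read back with glReadPixels is an EGL pbuffer rather than a window surface; a rough C sketch of that setup, with illustrative attribute values and no error checking:

        #include <EGL/egl.h>

        static EGLSurface createPbuffer(EGLDisplay *outDpy, int width, int height)
        {
            EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
            eglInitialize(dpy, NULL, NULL);

            const EGLint cfgAttribs[] = {
                EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,   /* offscreen rendering target */
                EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8, EGL_ALPHA_SIZE, 8,
                EGL_NONE
            };
            EGLConfig cfg;
            EGLint numCfg = 0;
            eglChooseConfig(dpy, cfgAttribs, &cfg, 1, &numCfg);

            const EGLint pbAttribs[] = { EGL_WIDTH, width, EGL_HEIGHT, height, EGL_NONE };
            EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbAttribs);

            EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
            eglMakeCurrent(dpy, surf, surf, ctx);

            *outDpy = dpy;
            return surf;   /* glReadPixels can now read back what is drawn into this surface */
        }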

    Read the article

  • Frame buffers won't work with pyglet

    - by Matthew Mitchell
    I have this code: def setup_framebuffer(surface): #Create texture if not done already if surface.texture is None: create_texture(surface) #Render child to parent if surface.frame_buffer is None: surface.frame_buffer = glGenFramebuffersEXT(1) glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, surface.frame_buffer) glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, surface.texture, 0) glPushAttrib(GL_VIEWPORT_BIT) glViewport(0,0,surface._scale[0],surface._scale[1]) glMatrixMode(GL_PROJECTION) glLoadIdentity() #Load the projection matrix gluOrtho2D(0,surface._scale[0],0,surface._scale[1]) glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, surface.frame_buffer) Despite the second parameter printing as 1 in a test I did, I get an error on the line glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, surface.frame_buffer). I only got this after moving to pyglet; GLUT is too limited. Thank you.

    Read the article

  • glFramebufferTexture2D performance

    - by nornagon
    I'm doing heavy computation using the GPU, which involves a lot of render-to-texture operations. It's an iterative computation, so there's a lot of rendering to a texture, then rendering that texture to another texture, then rendering the second texture back to the first texture and so on, passing the texture through a shader each time. My question is: is it better to have a separate FBO for each texture I want to render into, or should I rather have one FBO and bind the target texture using glFramebufferTexture2D each time I want to change render target? My platform is OpenGL ES 2.0 on the iPhone.
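
    By way of illustration, the two alternatives the question weighs look roughly like this in GL ES 2.0 C; which one is faster is driver-dependent, so this is only a sketch of the structure, not a verdict, and the handles are illustrative:

        /* Option A: a single FBO, re-attaching the destination texture each iteration. */
        static void iterateSingleFbo(GLuint fbo, GLuint tex[2], int iterations)
        {
            glBindFramebuffer(GL_FRAMEBUFFER, fbo);
            for (int i = 0; i < iterations; i++) {
                glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                       GL_TEXTURE_2D, tex[(i + 1) & 1], 0);
                glBindTexture(GL_TEXTURE_2D, tex[i & 1]);
                /* ... draw a fullscreen quad through the shader ... */
            }
        }

        /* Option B: one FBO per texture, created once, only bound inside the loop. */
        static void iterateTwoFbos(GLuint fbos[2], GLuint tex[2], int iterations)
        {
            for (int i = 0; i < iterations; i++) {
                glBindFramebuffer(GL_FRAMEBUFFER, fbos[(i + 1) & 1]);
                glBindTexture(GL_TEXTURE_2D, tex[i & 1]);
                /* ... draw a fullscreen quad through the shader ... */
            }
        }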

    Read the article

  • How can I successfully perform hidden line removal after a pass through an FBO?

    - by Brett
    Hi All, I'm trying to perform hidden line removal using polygon offset fill. The code works perfectly if I render directly to the window buffer, but it fails to draw the lines when passed through an FBO, as shown below. The code I use to draw the objects: void drawCubes (GLboolean removeHiddenLines) { glLineWidth(2.0); glPushMatrix(); camera.ApplyCameraTransform(); for(int i = 0; i < 50; i ++){ glPushMatrix(); cube[i].updatePerspective(); glPolygonMode(GL_FRONT_AND_BACK, GL_LINE); glColor3f(1.0,1.0,1.0); cube[i].draw(); glPolygonMode(GL_FRONT_AND_BACK, GL_FILL); if(removeHiddenLines){ glEnable(GL_POLYGON_OFFSET_FILL); glPolygonOffset(1.0, 1.0); glColor3f(1.0, 0.0, 0.0); //fill polygons for hidden line removal cube[i].draw(); glDisable(GL_POLYGON_OFFSET_FILL); } glPopMatrix(); } glPopMatrix(); } For this example, the first pass involves rendering to both the window buffer and an FBO. void firstPass() { glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glViewport(0, 0, fboWidth, fboHeight); glEnable(GL_DEPTH_TEST); glDisable(GL_TEXTURE_2D); drawParticleView(GL_TRUE); glDisable(GL_DEPTH_TEST); glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebufferID[0]); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glViewport(0, 0, fboWidth, fboHeight); glEnable(GL_DEPTH_TEST); glDisable(GL_TEXTURE_2D); drawParticleView(GL_TRUE); glDisable(GL_DEPTH_TEST); } The second pass renders the FBO back to the window buffer. void secondPass() { glEnable(GL_TEXTURE_2D); glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0); glBindTexture(GL_TEXTURE_2D, renderTextureID[0]); glViewport(fboWidth, 0, fboWidth, fboHeight); glMatrixMode(GL_PROJECTION); glLoadIdentity(); glMatrixMode(GL_MODELVIEW); glLoadIdentity(); glBegin(GL_QUADS); glTexCoord2i(0, 0); glVertex2f(-1.0f, -1.0f); glTexCoord2i(1, 0); glVertex2f(1.0f, -1.0f); glTexCoord2i(1, 1); glVertex2f(1.0f, 1.0f); glTexCoord2i(0, 1); glVertex2f(-1.0f, 1.0f); glEnd(); glDisable(GL_TEXTURE_2D); } I don't understand why the two views aren't the same. Am I missing something (obviously I am)? Thanks
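
    One detail worth double-checking in a setup like this, sketched below in C purely as an assumption about what might differ: the window-system framebuffer gets its depth buffer from the pixel format, but an application-created FBO only has whatever depth storage is explicitly attached, and polygon-offset hidden line removal relies on the depth test. The names reuse those from the question and are otherwise illustrative:

        static void attachFboDepth(GLuint fbo, GLsizei fboWidth, GLsizei fboHeight)
        {
            GLuint depthRb = 0;
            glGenRenderbuffersEXT(1, &depthRb);
            glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRb);
            glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, fboWidth, fboHeight);

            glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
            glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                         GL_RENDERBUFFER_EXT, depthRb);

            if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT) {
                /* attachments are inconsistent; fix them before rendering into the FBO */
            }
        }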

    Read the article

  • GPU hangs when switching graphics cards

    - by Lie Ryan
    I have a laptop (Dell Inspiron N4110) with a switchable graphic. $ lspci | grep VGA 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 01:00.0 VGA compatible controller: ATI Technologies Inc NI Whistler [AMD Radeon HD 6600M Series] (rev ff) Normally, my laptop starts with both graphic cards enabled, which caused the laptop to turn very hot and the fan to become very noisy. I have been using a small script to disable the Radeon card. For some time, I'm quite happy with this arrangement. However, I have been having some issues with the Intel card (IGD), the Intel card often randomly hang when running OpenGL apps; and so I want to give the Radeon card (DIS) another chance. I have never been able to switch to the Radeon card, but recently, I found out that if I do a "delayed switching" (DDIS): # echo "DDIS" > /sys/kernel/debug/vgaswitcheroo/switch root@lieryan-dell-ubuntu:/sys/kernel/debug/vgaswitcheroo# cat switch 0:IGD:+:Pwr:0000:00:02.0 1:DIS: :Pwr:0000:01:00.0 then I logoff (i.e. to restart X), the screen switch to pseudo-tty and then it stuck there freezing. At this situation, mouse and keyboard stops working so I can't switch to another ptty. I tried ssh-ing from another computer to salvage logs (dmesg at that point) and whatnot; I found out that when freezing, the active graphic card is the AMD card: -- this is from ssh -- # cat switch 0:IGD: :Off:0000:00:02.0 1:DIS:+:Pwr:0000:01:00.0 but the GPU is apparently hung, looking at dmesg gives: ... [ 1411.649974] vga_switcheroo: client 0 refused switch [ 1411.649985] vga_switcheroo: setting delayed switch to client 1 [ 1423.911759] vga_switcheroo: processing delayed switch to 1 [ 1424.006564] fbcon: Remapping primary device, fb1, to tty 1-63 [ 1424.006799] i915: switched off [ 1424.840351] [drm:drm_mode_getfb] *ERROR* invalid framebuffer id [ 1425.718088] [drm:drm_mode_getfb] *ERROR* invalid framebuffer id [ 1426.622377] [drm:drm_mode_getfb] *ERROR* invalid framebuffer id [ 1427.355683] [drm:drm_mode_getfb] *ERROR* invalid framebuffer id [ 1428.193549] [drm:drm_mode_getfb] *ERROR* invalid framebuffer id ... the invalid framebuffer id error is repeated for many times over ... I were able to successfully recover by switching back to the Intel card and restarting X from ssh; indicating that only the Radeon card has problems switching. System info: $ uname -a Linux lieryan-dell-ubuntu 3.0.0-14-generic #23-Ubuntu SMP Mon Nov 21 20:28:43 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux $ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 11.10 Release: 11.10 Codename: oneiric The laptop also do not have the option to set graphic card at BIOS and the proprietary driver, fglrx, also have never worked; when I installed it through jockey ("Additional Drivers"), glxinfo showed that it still being rendered by Mesa, the /sys/kernel/debug/vgaswitcheroo directory has gone missing, and the driver crashes with a traceback if I use xorg.conf to tell X to use fglrx. Anyone had any idea if it is possible to use this AMD card either with the radeon or the fglrx driver? logs: dmesg

    Read the article

  • Renderbuffer to GLSL shader?

    - by Dan
    I have software that performs volume rendering through a raycasting approach. The actual raycasting shader writes the raycast volume depth, through gl_FragDepth, into a framebuffer object that I bind before calling the shader. The problem I have is that I would like to use this depth in another shader that I call later on. I figured out that the only way to do that is to bind the framebuffer once the raycasting has finished, read the depth map through something like glReadPixels(0, 0, m_winSize.x , m_winSize.y, GL_DEPTH_COMPONENT, GL_FLOAT, pixels); and write it to a 2D texture as usual with glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, m_winSize.x, m_winSize.y, 0, GL_DEPTH_COMPONENT, GL_FLOAT, pixels), and then pass this 2D texture, which contains a simple depth map, to the other shader. However, I am not entirely sure that what I do is the proper way to do this. Is there any way to pass the framebuffer that I fill up in my raycasting shader to the other shader?
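
    A common alternative to the glReadPixels round trip, sketched here in C under the assumption that depth textures are supported (ARB_depth_texture / OpenGL 1.4+): attach a depth texture to the FBO directly, let the raycasting pass write into it through the depth test / gl_FragDepth, and then bind that same texture as a sampler in the later shader, with no CPU copy. Names and sizes are illustrative:

        /* assumes a GL loader header has been included */

        static GLuint attachDepthTexture(GLuint raycastFbo, GLsizei w, GLsizei h)
        {
            GLuint depthTex = 0;
            glGenTextures(1, &depthTex);
            glBindTexture(GL_TEXTURE_2D, depthTex);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, w, h, 0,
                         GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

            /* Depth written by the raycasting pass lands straight in depthTex. */
            glBindFramebuffer(GL_FRAMEBUFFER, raycastFbo);
            glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);

            return depthTex;   /* later pass: bind it to a texture unit and sample as usual */
        }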

    Read the article

  • FBO rendering gives different results on Galaxy S2 and S3

    - by BruceJones
    I'm working on a pong game and have recently set up FBO rendering so that I can apply some post-processing shaders. It proceeds like so: bind texture A to the framebuffer and draw the balls; bind texture B to the framebuffer and draw texture A using the fade shader on a fullscreen quad; bind the screen as the framebuffer and draw texture B using a normal textured-quad shader. Neither texture A nor B is cleared at any point; this way the balls leave trails on screen. See below for the fade shader. Fade Shader private final String fragmentShaderCode = "precision highp float;" + "uniform sampler2D u_Texture;" + "varying vec2 v_TexCoordinate;" + "vec4 color;" + "void main(void)" + "{" + " color = texture2D(u_Texture, v_TexCoordinate);" + " color.a *= 0.8;" + " gl_FragColor = color;" + "}"; This works fine on the Samsung Galaxy S3/Note 2, but it causes a strange effect and doesn't work on the Galaxy S2 or Note 1 (see the pictures in the original post). Can anyone explain the difference?

    Read the article

  • OpenGL ES mix-and-match blending options (complicated question)

    - by DevDevDev
    So I have a background line drawing, black and white. I want to be able to draw over this, keeping the black lines in there and drawing only where it is white. I can do this by using glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA); and the black stays black while the white gets whatever color I want. However, in the white areas I want to be able to draw some color, switch to another color, and then blend. This doesn't work, because the blend function is messed up. So what I was thinking is having a line-drawing framebuffer and a user-drawing framebuffer. The user draws into the user-drawing framebuffer with a different blend setting, and then I switch blending options and draw into the line-drawing framebuffer. But I don't know enough OpenGL to say whether or not this will work or is a stupid idea. Thanks a lot
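
    Structurally, the two-layer idea described above is workable; a rough GL ES C sketch of the compositing step, assuming the two layers have already been rendered into textures (the names are illustrative):

        static void composite(GLuint userDrawingTex, GLuint lineDrawingTex)
        {
            glEnable(GL_BLEND);

            /* Layer 1: the user's strokes, with ordinary alpha blending. */
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            glBindTexture(GL_TEXTURE_2D, userDrawingTex);
            /* ... draw a fullscreen quad ... */

            /* Layer 2: the black-and-white line drawing multiplied on top, so
               black lines stay black and white areas keep the colour underneath. */
            glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA);
            glBindTexture(GL_TEXTURE_2D, lineDrawingTex);
            /* ... draw a fullscreen quad ... */

            glDisable(GL_BLEND);
        }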

    Read the article

  • Two pass blur shader using libgdx tile map renderer

    - by Alexandre GUIDET
    I am trying to apply the following technique: a blur effect using a two-pass shader in my libgdx game, using the OrthogonalTiledMapRenderer. The idea is to blur the background, which is also a tilemap but rendered with another camera with a different zoom applied. Using the OrthogonalTiledMapRenderer sprite batch like this: backgroundMapRenderer.getSpriteBatch().setShader(shaderBlurX); backgroundMapRenderer.render(layerBackground); I get a render which is OK for the X blur pass. I then try using a frame buffer object as in this example, but the effect comes out far too zoomed in. I may be messing up the camera and the zoom factor. Here is the code: private ShaderProgram shaderBlurX; private ShaderProgram shaderBlurY; private int FBO_SIZE = 800; private FrameBuffer targetA; private FrameBuffer targetB; targetA = new FrameBuffer(Pixmap.Format.RGBA8888, FBO_SIZE, FBO_SIZE, false); targetB = new FrameBuffer(Pixmap.Format.RGBA8888, FBO_SIZE, FBO_SIZE, false); targetA.begin(); Gdx.gl.glClearColor(1, 1, 1, 0); Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT); backgroundMapRenderer.render(layerBackground); targetA.end(); targetB.begin(); Gdx.gl.glClearColor(1, 1, 1, 0); Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT); backgroundMapRenderer.getSpriteBatch().setShader(shaderBlurX); backgroundMapRenderer.render(layerBackground); targetB.end(); TextureRegion back = new TextureRegion(targetB.getColorBufferTexture()); back.flip(false, true); backgroundMapRenderer.getSpriteBatch() .setProjectionMatrix(backgroundCamera.combined); backgroundMapRenderer.getSpriteBatch().setShader(shaderBlurY); backgroundMapRenderer.getSpriteBatch().begin(); backgroundMapRenderer.getSpriteBatch().draw(back, 0, 0); backgroundMapRenderer.getSpriteBatch().end(); I know I am doing something the wrong way, but I can't find any resources about applying a two-pass shader with the tile map renderer. Does someone know how to achieve this?

    Read the article

  • How do I reconfigure my GLES frame buffer after a rotation?

    - by Panda Pajama
    I am implementing interface rotation for my GLES based game for iOS, written in Xamarin.iOS with OpenTK. I am detecting the rotation by overriding WillRotate, in my UIViewController, and I correctly re-setup all of my projection matrices. However, when drawing a sprite, the image looks a bit blurrier on the landscape version compared to the portrait version, as you can see in the following closeups magnified 10x. Portrait (before rotating) Landscape (after rotating) In both cases, I'm using the same texture with the same sampler, the same shader, and the same GL state. I just changed the order of the parameters in the projection matrix, so the resulting sizes should be exactly the same pixelwise. Since this could be thought of as a window resize, I suppose that the framebuffer has to be recreated to the new size. When working on desktop apps on Direct3D11 (SharpDX), I would have to call swapChain.ResizeBuffers() to do this. I have tried setting AutoResize = true in my iPhoneOSGameView, but then the framebuffer gets clipped as I rotate the interface, and then everything disappears when rotating the interface again. I'm not doing anything strange, my framebuffer initialization is pretty vanilla: int scaling = (int)UIScreen.MainScreen.Scale; DeviceWidth = (int)UIScreen.MainScreen.Bounds.Width * scaling; DeviceHeight = (int)UIScreen.MainScreen.Bounds.Height * scaling; Size = new System.Drawing.Size((int)(DeviceWidth), (int)(DeviceHeight)); Bounds = new System.Drawing.RectangleF(0, 0, DeviceWidth, DeviceHeight); Frame = new System.Drawing.RectangleF(0, 0, DeviceWidth, DeviceHeight); ContextRenderingApi = EAGLRenderingAPI.OpenGLES2; AutoResize = true; LayerRetainsBacking = true; LayerColorFormat = EAGLColorFormat.RGBA8; I get inconsistent results when changing Size, Bounds and Frame on my CreateFrameBuffer override, but since the documentation is so incomplete (it has nothing on Bounds and Frame), I have resorted to randomly changing stuff here and there without really knowing what is going on. There is a similar question which has no answers. However, I don't know if they're experiencing the same problem as I am. Is my supposition that recreating the framebuffer is necessary, correct? If so, does anybody know how to do it correctly in OpenTK for Xamarin.iOS?
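
    At the plain GL ES 2.0 level the resize does boil down to reallocating the renderbuffer storage at the new dimensions and reattaching it (on iOS the colour storage normally comes from the CAEAGLLayer via the context's renderbufferStorage call rather than glRenderbufferStorage); a generic C sketch of the idea, not the OpenTK/Xamarin-specific path, with illustrative handles:

        static void resizeFramebuffer(GLuint fbo, GLuint colorRb, GLuint depthRb, int w, int h)
        {
            glBindFramebuffer(GL_FRAMEBUFFER, fbo);

            /* Reallocate storage at the new (rotated) size; the attachment IDs stay the same. */
            glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
            glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, w, h);   /* OES_rgb8_rgba8 */
            glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRb);

            glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
            glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, w, h);
            glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);

            glViewport(0, 0, w, h);
        }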

    Read the article

  • (LWJGL) Pixel Unpack Buffer Object is Disabled? (glTexImage2D)

    - by OstlerDev
    I am trying to create a render target for my game so that I can re-render at a different screen size. But I am receiving the following error: Exception in thread "main" org.lwjgl.opengl.OpenGLException: Cannot use offsets when Pixel Unpack Buffer Object is disabled Here is the source code for my Render method: // clear screen GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT); // Start FBO Rendering Code // The framebuffer, which regroups 0, 1, or more textures, and 0 or 1 depth buffer. int FramebufferName = GL30.glGenFramebuffers(); GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, FramebufferName); // The texture we're going to render to int renderedTexture = glGenTextures(); // "Bind" the newly created texture : all future texture functions will modify this texture glBindTexture(GL_TEXTURE_2D, renderedTexture); // Give an empty image to OpenGL ( the last "0" ) glTexImage2D(GL_TEXTURE_2D, 0,GL_RGB, 1024, 768, 0,GL_RGB, GL_UNSIGNED_BYTE, 0); // Poor filtering. Needed ! glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // Set "renderedTexture" as our colour attachement #0 GL32.glFramebufferTexture(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0, renderedTexture, 0); // Set the list of draw buffers. IntBuffer drawBuffer = BufferUtils.createIntBuffer(20 * 20); GL20.glDrawBuffers(drawBuffer); // Always check that our framebuffer is ok if(GL30.glCheckFramebufferStatus(GL30.GL_FRAMEBUFFER) != GL30.GL_FRAMEBUFFER_COMPLETE){ System.out.println("Framebuffer was not created successfully! Exiting!"); return; } // Resets the current viewport GL11.glViewport(0, 0, scaleWidth*scale, scaleHeight*scale); GL11.glMatrixMode(GL11.GL_MODELVIEW); GL11.glLoadIdentity(); // let subsystem paint if (callback != null) { callback.frameRendering(); } // update window contents Display.update(); It is crashing on this line: glTexImage2D(GL_TEXTURE_2D, 0,GL_RGB, 1024, 768, 0,GL_RGB, GL_UNSIGNED_BYTE, 0); I am not really sure why it is crashing and looking around I have not been able to find out why. Any help or insight would be greatly welcome.
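
    The underlying C entry point makes the distinction clearer: glTexImage2D takes a data pointer, and passing NULL simply allocates uninitialized storage, whereas an integer offset is only legal while a pixel-unpack buffer is bound (which is what the exception above is complaining about; in LWJGL the equivalent of NULL is a null ByteBuffer rather than the literal 0). A minimal C sketch of allocating an empty render-target texture:

        static GLuint createEmptyTexture(void)
        {
            GLuint renderedTexture = 0;
            glGenTextures(1, &renderedTexture);
            glBindTexture(GL_TEXTURE_2D, renderedTexture);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

            /* No pixel-unpack buffer is bound, so the data argument is a NULL pointer:
               this allocates 1024x768 of RGB storage without uploading any pixels. */
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 1024, 768, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
            return renderedTexture;
        }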

    Read the article

  • TTY with 256 colors?

    - by timn
    With URxvt and xterm it is possible to use a virtual terminal supporting 256 colors instead of only eight. Since my Intel GMA graphics card is well-supported by the KMS framebuffer driver, I am exclusively working on the TTY. Unfortunately it only supports eight colors although with MPlayer (-vo fbdev/fbdev2) and other framebuffer tools far more can be addressed. Is there a way to tell the TTY to use more than eight colors?

    Read the article

  • GDI fast scroll

    - by genesys
    Hi! I use GDI to create a custom text widget. I draw directly to the screen, unbuffered. Now I'd like to implement some fast scrolling that simply pixel-shifts the respective part of the framebuffer (and only redraws the newly visible lines). I noticed that, for example, the rich text control does it like this: if I use some GDI drawing functions to draw directly to the framebuffer over a rich text control and then scroll the rich text, it also scrolls my drawing along with the text. So I assume the rich text control simply pixel-shifts its part of the framebuffer. I'd like to do the same, but don't know how. Can someone help? (Independent of programming language.) Thanks!
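
    For reference, the Win32 call for exactly this kind of pixel shift is ScrollWindowEx (or ScrollDC on a raw HDC): it blits the existing pixels and invalidates only the newly exposed strip, so the next WM_PAINT redraws just those lines. A small C sketch, with the function name and the line height chosen only for illustration:

        #include <windows.h>

        /* Scroll the text widget's client area up by one text line. The exposed
           strip at the bottom is invalidated and repainted in WM_PAINT. */
        static void scrollUpOneLine(HWND hwnd, int lineHeight)
        {
            ScrollWindowEx(hwnd,
                           0, -lineHeight,      /* dx, dy: shift contents upwards    */
                           NULL,                /* scroll the whole client area      */
                           NULL,                /* no clipping rectangle             */
                           NULL, NULL,          /* no update region/rect requested   */
                           SW_INVALIDATE);      /* invalidate the uncovered strip    */
            UpdateWindow(hwnd);                 /* trigger WM_PAINT for the new line */
        }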

    Read the article
