Search Results

Search found 2513 results on 101 pages for 'opengl'.

  • OpenGL: Drawing to a texture

    - by Danran
    Well, I'm just a bit stuck wondering how to draw an item to a texture. Specifically, I'm using glDrawArrays(GL_LINE_STRIP, indices[0], indices.size()); Because what I'm drawing via the above call updates every frame, I'm just not sure how to go about drawing what I have into a texture. Any help is greatly appreciated! Edit: Unfortunately my graphics card doesn't support framebuffer objects :/ So I've been trying to get the copy-the-contents-from-the-back-buffer method working. Here's what I currently have: http://pastebin.com/dJpPt6Pd Sadly, all I get is a white square. It's probably something simple that I'm doing wrong; I'm just unsure what it could be.
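
    For reference, a minimal sketch of the copy-from-back-buffer path (tex, width, height, first and count are placeholder names, not taken from the question's paste). One very common cause of an all-white result is leaving GL_TEXTURE_MIN_FILTER at its mipmapped default while never generating mipmaps, which makes the texture incomplete:

        // One-time setup: allocate an empty RGBA texture to copy into.
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // no mipmaps
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);

        // Every frame: draw first, then copy the back buffer into the
        // texture (before swapping buffers).
        glDrawArrays(GL_LINE_STRIP, first, count);
        glBindTexture(GL_TEXTURE_2D, tex);
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,   // level, x/y offset in the texture
                            0, 0, width, height);     // region of the back buffer

    On hardware old enough to lack FBOs, the texture dimensions usually also need to be powers of two.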

  • Fog with Blend in OpenGL

    - by MhdAljobory
    I want to add fog to my scene, which contains transparent textures drawn with blending. When I enable the fog, the transparent textures appear white from a distance, but when I disable it they look fine. What is the solution to this whiteness problem? Fog code: GLfloat fogColor[4]= {0.5f, 0.5f, 0.5f, 1.0f}; glClearColor(0.5f,0.5f,0.5f,1.0f); glFogi(GL_FOG_MODE, GL_LINEAR); glFogfv(GL_FOG_COLOR, fogColor); glFogf(GL_FOG_DENSITY, 0.35f); glHint(GL_FOG_HINT, GL_DONT_CARE); glFogf(GL_FOG_START, 1.0f); glFogf(GL_FOG_END, 1000.0f); glEnable(GL_FOG);
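
    One likely culprit: fog is applied to every fragment of a blended quad, including its fully transparent texels, so their RGB is pushed toward the light-grey fog color before blending. A minimal fixed-function sketch of the usual workaround, assuming the transparent parts have alpha near zero: discard those texels with the alpha test, and draw the transparent geometry last, sorted back to front:

        glEnable(GL_ALPHA_TEST);
        glAlphaFunc(GL_GREATER, 0.1f);   // drop fragments with alpha <= 0.1
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        // ... draw the transparent, fogged geometry here ...
        glDisable(GL_ALPHA_TEST);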

  • OpenGL: Move camera regardless of rotation

    - by Markus
    For a 2D board game I'd like to move and rotate an orthographic camera using coordinates given in a reference system (window space), but I simply can't get it to work. The idea is that the user can drag the camera over a surface, rotate it and scale it. Rotation and scaling should always be around the center of the current viewport. The camera is set up as: gl.glMatrixMode(GL2.GL_PROJECTION); gl.glLoadIdentity(); gl.glOrtho(-width/2, width/2, -height/2, height/2, nearPlane, farPlane); where width and height are equal to the viewport's width and height, so that 1 unit is one pixel when no zoom is applied. Since these transformations usually mean (scaling and) translating the world, then rotating it, the implementation is: gl.glMatrixMode(GL2.GL_MODELVIEW); gl.glLoadIdentity(); gl.glRotatef(rotation, 0, 0, 1); // e.g. 45° gl.glTranslatef(x, y, 0); // e.g. +10 for 10px right, -2 for 2px down gl.glScalef(zoomFactor, zoomFactor, zoomFactor); // e.g. scale by 1.5 That, however, has the nasty side effect that the translations are transformed as well, i.e. applied in world coordinates. If I rotate by 90° and translate again, the X and Y axes are swapped. If I reorder the transformations so they read gl.glTranslatef(x, y, 0); gl.glScalef(zoomFactor, zoomFactor, zoomFactor); gl.glRotatef(rotation, 0, 0, 1); the translation is applied correctly (in reference space, so a translation along x always visually moves the camera sideways), but rotation and scaling are now performed around the origin. It shouldn't be too hard, so what am I missing?
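
    A sketch of one common fix (plain C-style GL, mapping 1:1 onto the gl.* calls above): keep the rotate-then-translate order, so rotation and zoom stay centered on the viewport, but rotate each screen-space drag delta by the inverse camera rotation before accumulating it. Here dx/dy are assumed to be the drag delta in pixels; depending on your drag convention the signs may need flipping:

        float rad = -rotation * 3.14159265f / 180.0f;   // inverse rotation, in radians
        x += dx * cosf(rad) - dy * sinf(rad);           // pan delta rotated into world space
        y += dx * sinf(rad) + dy * cosf(rad);

        glLoadIdentity();
        glRotatef(rotation, 0, 0, 1);
        glTranslatef(x, y, 0);
        glScalef(zoomFactor, zoomFactor, zoomFactor);

    This way a horizontal drag always moves the view sideways on screen, regardless of the current rotation.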

  • OpenGL: Keeping alpha in a render buffer

    - by Cyan
    In my current task, I need to render a texture into a render buffer in order to work on it (apply special filters) there. The result is then treated as a "new texture", which is later displayed. This works fine, except when the texture contains transparent or semi-transparent parts. My current guess is that, within the render buffer, the texture is "merged" with a kind of "grey background"; this obviously affects the R, G, B components of the transparent pixels. I've yet to find a way around this. Even manually assigning alpha after the rendering process doesn't save the day for semi-transparent pixels, whose RGB values are "tainted" by the grey background.
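
    A hedged guess at the mechanism: with the usual glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) applied to all channels, the alpha written into the offscreen buffer is wrong (it gets multiplied by itself), and if the buffer was cleared to an opaque grey, transparent texels blend against that grey. A sketch of the common remedy: clear the render target to transparent black and blend RGB and alpha with separate factors (glBlendFuncSeparate, OpenGL 1.4+):

        glClearColor(0.0f, 0.0f, 0.0f, 0.0f);   // transparent black, not opaque grey
        glClear(GL_COLOR_BUFFER_BIT);
        glEnable(GL_BLEND);
        glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,   // RGB factors
                            GL_ONE,       GL_ONE_MINUS_SRC_ALPHA);  // alpha factors
        // ... render the texture and apply the filters here ...

    The destination alpha then accumulates correctly instead of being overwritten, and the later display pass can blend the result as a normal straight-alpha texture.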

  • Using a texture as an integer array (OpenGL 3.3, shader version 3.3)

    - by Cubic
    I'm trying to have something like an integer array uniform for my fragment shader (I only need read access). It's a fairly large chunk of data: not so large that uploading it every frame would be impossible, but enough to make me want to avoid doing so. Essentially I want to just pass the shader a uniform telling it where this "array" is. I believe I can use a 1D texture for this, but I don't know how (actually, I don't know how to do many things, because I just can't seem to find a reference for GLSL 3.3; I only ever find references for the C API). This sounds like a rather basic question and I'm sure it's been answered somewhere already, but I keep searching and can't quite find what I'm looking for.
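
    A minimal sketch of the 1D-texture route (GL 3.x): upload the ints as an integer texture and read them in GLSL with texelFetch, which takes an integer index and performs no filtering. The names count, data and prog are placeholders:

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_1D, tex);
        // Integer textures must not be filtered; NEAREST is required.
        glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage1D(GL_TEXTURE_1D, 0, GL_R32I, count, 0,
                     GL_RED_INTEGER, GL_INT, data);
        // Bind to texture unit 0 and point the sampler uniform at it
        // (with prog bound via glUseProgram):
        glActiveTexture(GL_TEXTURE0);
        glUniform1i(glGetUniformLocation(prog, "values"), 0);

        // Fragment shader side (#version 330):
        //   uniform isampler1D values;
        //   int v = texelFetch(values, i, 0).r;   // read the int at index i

    For larger arrays, a buffer texture (samplerBuffer plus glTexBuffer) serves the same purpose without the 1D-texture size limit.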

  • How does the iPhone know which OpenGL ES context to use between 1.1 and 2.0?

    - by Moshe
    I've been digging around the net recently and noticed that some video tutorials show an older template (pre-SDK 3.2) with one OpenGL ES context. Now there are two of them, which, I've gleaned, are the two versions of OpenGL ES available on the newer iOS devices. Can I just use the older one, or do I need to do everything twice? How do I tell the iPhone to use the older context, or will it do so automatically?

  • Mixing OpenGL and Interface Builder/UI controls - bad idea? Why? (iPhone)

    - by Adam
    I've heard that OpenGL ES and the standard iPhone UI controls don't play well together, but I'm wondering if anyone knows why, and what the effects are? I'm writing an OpenGL-based game, and the view is loaded from a nib file with UI controls, and it seems to work OK, but the game is really simple at this point... does using UI controls cause some kind of performance hit?

  • How to stop OpenGL from applying blending to certain content? (see pics)

    - by RexOnRoids
    Supporting Info: I use cocos2d to draw a sprite (graph background) on the screen (z:-1). I then use cocos2d to draw lines/points (z:0) on top of the background -- and make some calls to OpenGL blending functions before the drawing to SMOOTH out the lines. Problem: The problem is that: aside from producing smooth lines/points, calling these OpenGL blending functions seems to "degrade" the underlying sprite (graph background). As you can see from the images below, the "degraded" background seems to be made darker and less sharp in Case 2. So there is a tradeoff: I can either have (Case 1) a nice background and choppy lines/points, or I can have (Case 2) nice smooth lines/points and a degraded background. But obviously I need both. THE QUESTION: How do I set OpenGL so as to only apply the blending to the layer with the Lines/Points in it and thus leave the background alone? The Code: I have included code of the draw() method of the CCLayer for both cases explained above. As you can see, the code producing the difference between Case 1 and Case 2 seems to be 1 or 2 lines involving OpenGL Blending. Case 1 -- MainScene.h (CCLayer): -(void)draw{ int lastPointX = 0; int lastPointY = 0; GLfloat colorMAX = 255.0f; GLfloat valR; GLfloat valG; GLfloat valB; if([self.myGraphManager ready]){ valR = (255.0f/colorMAX)*1.0f; valG = (255.0f/colorMAX)*1.0f; valB = (255.0f/colorMAX)*1.0f; NSEnumerator *enumerator = [[self.myGraphManager.currentCanvas graphPoints] objectEnumerator]; GraphPoint* object; while ((object = [enumerator nextObject])) { if(object.filled){ /*Commenting out the following two lines induces a problem of making it impossible to have smooth lines/points, but has merit in that it does not degrade the background sprite.*/ //glEnable (GL_BLEND); //glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); glHint (GL_LINE_SMOOTH_HINT, GL_DONT_CARE); glEnable (GL_LINE_SMOOTH); glLineWidth(1.5f); glColor4f(valR, valG, valB, 1.0); ccDrawLine(ccp(lastPointX, lastPointY), ccp(object.position.x, object.position.y)); lastPointX = object.position.x; lastPointY = object.position.y; glPointSize(3.0f); glEnable(GL_POINT_SMOOTH); glHint(GL_POINT_SMOOTH_HINT, GL_NICEST); ccDrawPoint(ccp(lastPointX, lastPointY)); } } } } Case 2 -- MainScene.h (CCLayer): -(void)draw{ int lastPointX = 0; int lastPointY = 0; GLfloat colorMAX = 255.0f; GLfloat valR; GLfloat valG; GLfloat valB; if([self.myGraphManager ready]){ valR = (255.0f/colorMAX)*1.0f; valG = (255.0f/colorMAX)*1.0f; valB = (255.0f/colorMAX)*1.0f; NSEnumerator *enumerator = [[self.myGraphManager.currentCanvas graphPoints] objectEnumerator]; GraphPoint* object; while ((object = [enumerator nextObject])) { if(object.filled){ /*Enabling the following two lines gives nice smooth lines/points, but has a problem in that it degrades the background sprite.*/ glEnable (GL_BLEND); glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); glHint (GL_LINE_SMOOTH_HINT, GL_DONT_CARE); glEnable (GL_LINE_SMOOTH); glLineWidth(1.5f); glColor4f(valR, valG, valB, 1.0); ccDrawLine(ccp(lastPointX, lastPointY), ccp(object.position.x, object.position.y)); lastPointX = object.position.x; lastPointY = object.position.y; glPointSize(3.0f); glEnable(GL_POINT_SMOOTH); glHint(GL_POINT_SMOOTH_HINT, GL_NICEST); ccDrawPoint(ccp(lastPointX, lastPointY)); } } } }
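
    A hedged reading of the symptom: GL state is global, so blend state left enabled in draw() leaks into how cocos2d renders every other node. cocos2d's textures are typically premultiplied-alpha and drawn with a (GL_ONE, GL_ONE_MINUS_SRC_ALPHA) blend func; leaving (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) in effect re-multiplies their RGB by alpha and darkens them, which matches the "degraded" background. Rather than per-layer blending (which GL has no notion of), set the state for the lines and restore it before returning; a sketch, with the restore values assuming cocos2d's usual premultiplied defaults:

        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glEnable(GL_LINE_SMOOTH);
        glEnable(GL_POINT_SMOOTH);
        // ... draw the smoothed lines/points here ...

        // Restore what the sprite rendering expects before returning,
        // so the blending only ever applies to this layer's geometry:
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
        glDisable(GL_LINE_SMOOTH);
        glDisable(GL_POINT_SMOOTH);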

  • What features does D3D have that OpenGL does not (and vice versa)?

    - by Tom
    Are there any feature comparisons of Direct3D 11 and the newest OpenGL versions? Simply put, Direct3D 11 introduced four main features (taken from Wikipedia): tessellation, multithreaded rendering, compute shaders, and an increased texture cache. Now I'm wondering: how do the newest versions of OpenGL cope with these features? And since I have a feeling that there are also features OpenGL has that Direct3D lacks, what are those?

  • What is the situation with OpenGL under Ubuntu Unity and Gnome 3?

    - by user827992
    In a GNU/Linux distribution, Xorg is usually installed as the main graphical server. It operates with a client-server logic: a special window is designated as the desktop environment, and this special window handles all the eye-candy stuff like decorations, icons and effects. The problem is that the latest UIs rely heavily on hardware acceleration; Unity is an overlay on Compiz, and the Gnome Shell also requires an active GPU driver to work well. So my questions are: given that I can find multiple OpenGL implementations on the same OS, who is handling my OpenGL buffer? How is the OpenGL buffer managed compared to the other windows? How can I be sure that my OpenGL implementation is glued to the hardware and not routed through the client-server logic of Xorg? For example, I have tried the clutter library and have only experienced problems under both Unity and GTK/Gnome, but no problems under other OSes.
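
    Once any GL context is current, the implementation strings tell you who is actually servicing the calls; a renderer string containing "llvmpipe" or "Software Rasterizer" means Mesa's software path rather than the hardware driver. A minimal check:

        printf("vendor:   %s\n", (const char *)glGetString(GL_VENDOR));
        printf("renderer: %s\n", (const char *)glGetString(GL_RENDERER));
        printf("version:  %s\n", (const char *)glGetString(GL_VERSION));

    From a shell, `glxinfo | grep -E "direct rendering|renderer"` reports the same information, including whether the context talks to the driver directly rather than being proxied through the X server.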

  • Should I use OpenGL while working with the C++ Software?

    - by Paralytic
    I am completely new to programming, and to game development for that matter. I am using the C++ software to create my game engine with the help of a beginners' guide. I noticed it has an OpenGL option when starting up a new project. I've heard of OpenGL in connection with game development, but I'm not sure what it is. Should I be using OpenGL when creating my game engine? Will it matter if I just start with a blank slate?

  • Will OpenGL give me any FPS improvement over CoreAnimation for scrolling a large image?

    - by Ben Roberts
    Hi, I'm considering rewriting the menu system of my iPhone app to use OpenGL, just to improve the smoothness of scrolling a big image (480x1900 px) across the screen. I'm looking at doing this as a way to improve on the method/solution described here: http://stackoverflow.com/questions/1443140/smoother-uiview. That solution was a big improvement over the previous implementation, but it's still not perfect, and as this is the first thing the user will see, I'd like it to be as flawless as possible. Will switching to OpenGL give me the sort of smooth scrolling I'm looking for? I've steered clear of OpenGL until now, as this is my first app and Core Animation has handled everything else I've thrown at it well enough. It would be good to know if this alternative implementation is likely to work! Thanks.

  • Using OpenGL drawing operations in an object-oriented setting?

    - by Lion Kabob
    I've been plowing through basic shaders and whatnot for an application I'm writing, and I've been having trouble figuring out a high-level organization for the drawing calls. I'm thinking of having a singleton class which implements a number of basic drawing operations, taking data from "user" classes and passing it to the appropriate OpenGL calls. I'm wondering how people do this when writing their own applications, as the internet is chock full of basic "your first shader" tutorials but has very little on suggested organization of drawing code. My particular environment targets iPad/OpenGL ES 2.0, but I think the question stands for most environments.
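
    For what it's worth, a bare-bones C++ sketch of that shape: one renderer singleton owning the GL calls, with "user" classes reduced to plain data. All names here are illustrative, not from any particular engine, and the GL calls are the ES 2.0 subset:

        struct Mesh   { GLuint vbo; GLenum mode; GLsizei vertexCount; };
        struct Shader { GLuint program; };

        class Renderer {
        public:
            static Renderer& instance() {
                static Renderer r;          // constructed on first use
                return r;
            }
            void drawMesh(const Mesh& m, const Shader& s) {
                glUseProgram(s.program);
                glBindBuffer(GL_ARRAY_BUFFER, m.vbo);
                // ... set vertex attributes and uniforms here ...
                glDrawArrays(m.mode, 0, m.vertexCount);
            }
        private:
            Renderer() {}                   // one-time GL setup goes here
        };

    The usual caveats apply: the singleton must not touch GL before the context exists, and every call stays on the GL thread.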

  • What's the minimum iOS version that supports OpenGL ES 2.0?

    - by Shireesh Agrawal
    Hi, I am not sure if the question even makes sense. I am writing an iPhone game which uses OpenGL ES 2.0. I know that OpenGL ES 2.0 is supported on the 3GS and higher. Is there a minimum iOS version requirement too, e.g. does the device need iOS 3.1.3 or higher? Or does it depend solely on the hardware? Thanks! -shireesh P.S. I tried to search the net but haven't found much; perhaps I am not using the right keywords.

  • Why does OpenGL's glDrawArrays() fail with GL_INVALID_OPERATION under Core Profile 3.2, but not 3.3 or 4.2?

    - by metaleap
    I have OpenGL rendering code calling glDrawArrays that works flawlessly when the OpenGL context is (automatically / implicitly obtained) 4.2, but fails consistently (GL_INVALID_OPERATION) with an explicitly requested OpenGL core context 3.2. (Shaders are always set to #version 150 in both cases, but that's beside the point here, I suspect.) According to the spec, there are only two instances when glDrawArrays() fails with GL_INVALID_OPERATION: "if a non-zero buffer object name is bound to an enabled array and the buffer object's data store is currently mapped" -- I'm not doing any buffer mapping at this point; and "if a geometry shader is active and mode is incompatible with [...]" -- nope, no geometry shaders as of now. Furthermore: I have verified & double-checked that it's only the glDrawArrays() calls failing. Also double-checked that all arguments passed to glDrawArrays() are identical under both GL versions, buffer bindings too. This happens across 3 different Nvidia GPUs and 2 different OSes (Win7 and OSX, both 64-bit -- of course, in OSX we have only the 3.2 context, no 4.2 anyway). It does not happen with an integrated "Intel HD" GPU, but for that one I only get an automatic implicit 3.3 context (trying to explicitly force a 3.2 core profile with this GPU via GLFW fails the window creation, but that's an entirely different issue...). For what it's worth, here's the relevant routine excerpted from the render loop, in Go: func (me *TMesh) render () { curMesh = me curTechnique.OnRenderMesh() gl.BindBuffer(gl.ARRAY_BUFFER, me.glVertBuf) if me.glElemBuf > 0 { gl.BindBuffer(gl.ELEMENT_ARRAY_BUFFER, me.glElemBuf) gl.VertexAttribPointer(curProg.AttrLocs["aPos"], 3, gl.FLOAT, gl.FALSE, 0, gl.Pointer(nil)) gl.DrawElements(me.glMode, me.glNumIndices, gl.UNSIGNED_INT, gl.Pointer(nil)) gl.BindBuffer(gl.ELEMENT_ARRAY_BUFFER, 0) } else { gl.VertexAttribPointer(curProg.AttrLocs["aPos"], 3, gl.FLOAT, gl.FALSE, 0, gl.Pointer(nil)) /* BOOM! */ gl.DrawArrays(me.glMode, 0, me.glNumVerts) } gl.BindBuffer(gl.ARRAY_BUFFER, 0) } So of course this is part of a bigger render loop, though the whole "*TMesh" construction for now is just two instances, one a simple cube and the other a simple pyramid. What matters is that the entire drawing loop works flawlessly with no errors reported when GL is queried for errors under both 3.3 and 4.2, yet on 3 Nvidia GPUs with an explicit 3.2 core profile it fails with an error code that according to the spec is only raised in two specific situations, neither of which, as far as I can tell, applies here. What could be wrong here? Have you ever run into this? Any ideas what I have been missing?
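
    A hedged guess worth testing: a strict core profile has no default vertex array object, and with VAO 0 bound, glVertexAttribPointer/glDrawArrays raise GL_INVALID_OPERATION in 3.2+ core, while implicitly created 3.3/4.2 contexts are often compatibility (or otherwise forgiving) contexts that still provide a default VAO. That would match every symptom above without contradicting the two documented error conditions, since the VAO requirement is specified separately. The test is two calls at startup:

        GLuint vao;
        glGenVertexArrays(1, &vao);
        glBindVertexArray(vao);
        // ... then run the existing glVertexAttribPointer / glDrawArrays
        //     path unchanged ...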

  • How can I compress textures in OpenGL on iPhone/iPad?

    - by nonamelive
    Hi, I'm making an iPad app which needs OpenGL to do a flip animation. I have a front image texture and a back image texture; both textures are screenshots. // Capture an image of the screen UIGraphicsBeginImageContext(view.bounds.size); [view.layer renderInContext:UIGraphicsGetCurrentContext()]; image = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); // Allocate some memory for the texture GLubyte *textureData = (GLubyte*)calloc(maxTextureSize*4, maxTextureSize); // Create a drawing context to draw image into texture memory CGContextRef textureContext = CGBitmapContextCreate(textureData, maxTextureSize, maxTextureSize, 8, maxTextureSize*4, CGImageGetColorSpace(image.CGImage), kCGImageAlphaPremultipliedLast); CGContextDrawImage(textureContext, CGRectMake(0, maxTextureSize-size.height, size.width, size.height), image.CGImage); CGContextRelease(textureContext); // ...done creating the texture data [EAGLContext setCurrentContext:context]; glGenTextures(1, &textureToView); glBindTexture(GL_TEXTURE_2D, textureToView); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, maxTextureSize, maxTextureSize, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData); // free texture data which is by now copied into the GL context free(textureData); Each texture takes up about 8 MB of memory, which is unacceptable for an iPhone/iPad app. Could anyone tell me how I can compress the textures to reduce the memory use? I'm a complete newbie to OpenGL. Any help would be appreciated!
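
    Two hedged suggestions, given that these textures are runtime screenshots (so Apple's offline PVRTC compressor, texturetool, doesn't apply): size the texture to the image instead of allocating maxTextureSize x maxTextureSize, and upload 16-bit pixels instead of 32-bit RGBA, which halves the memory at some quality cost. pixels565 below is assumed to be the screenshot converted to 565 on the CPU, and texWidth/texHeight are the (power-of-two-rounded) image dimensions:

        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, texWidth, texHeight, 0,
                     GL_RGB, GL_UNSIGNED_SHORT_5_6_5, pixels565);

    PVRTC uploads via glCompressedTexImage2D are only a win for assets that can be compressed ahead of time.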

  • Porting a project to OpenGL 3

    - by Decapsuleur
    Hi everyone, I'm working on a C++ cross-platform OpenGL application (Windows, Linux and MacOS) and I am wondering if some of you could share advice on porting a large application to OpenGL 3. The reason I am looking into OpenGL 3 is that I think we could benefit a lot from the new "sync objects". Nvidia has supported such an extension since the GeForce 256 days (GL_NV_fence), but there seems to be no equivalent functionality on ATI hardware before OpenGL 3.0+... Our code makes quite heavy use of glut/freeglut, GLU functions, OpenGL 2 extensions and CUDA (on supported hardware). The problem I am now facing is that "gl3.h" and "gl.h" are mutually incompatible (as stated in gl3.h). Do you know if there is a GL3 GLUT equivalent? Also, looking at the CUDA toolkit header files, it seems that GL-CUDA interoperability is only available with older versions of OpenGL (cuda_gl_interop.h includes gl.h...). Am I missing something? Thanks a lot for your help.
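
    In case it helps with the evaluation, a minimal sketch of the GL 3.2 sync-object pattern that replaces GL_NV_fence:

        GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
        // ... later: block (with a timeout in nanoseconds) until the GPU
        // has executed everything submitted before the fence.
        GLenum r = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 16000000);
        if (r == GL_ALREADY_SIGNALED || r == GL_CONDITION_SATISFIED) {
            // safe to touch the resources guarded by the fence
        }
        glDeleteSync(fence);

    As for GLUT: recent freeglut can request a versioned context via glutInitContextVersion/glutInitContextProfile, though whether that plays nicely with the gl3.h/gl.h split and the CUDA interop headers would need checking.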

  • How to detect OpenGL capabilities without creating a GLSurfaceView (Android)

    - by ADB
    I am trying to query the OpenGL capabilities of the phone before deciding whether to use OpenGL or Canvas for graphics purposes. However, all the functions I can find documentation on require you to already have a valid OpenGL context (namely, create a GLSurfaceView and assign it a renderer, then check the OpenGL parameters in onSurfaceCreated). So, is there a way to check the extensions, renderer name and maximum texture size of the phone BEFORE having to create any OpenGL views?
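
    One approach that seems workable: skip GLSurfaceView entirely and bring up a tiny invisible EGL context yourself, query what you need, then tear it down. A sketch in C (the NDK flavour; the Java EGL10 API mirrors these calls one-for-one):

        #include <EGL/egl.h>
        #include <GLES/gl.h>

        EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
        eglInitialize(dpy, NULL, NULL);

        EGLint cfgAttribs[] = { EGL_SURFACE_TYPE, EGL_PBUFFER_BIT, EGL_NONE };
        EGLConfig cfg; EGLint n;
        eglChooseConfig(dpy, cfgAttribs, &cfg, 1, &n);

        // 1x1 off-screen pbuffer: no view, no visible surface required.
        EGLint pbAttribs[] = { EGL_WIDTH, 1, EGL_HEIGHT, 1, EGL_NONE };
        EGLSurface surf = eglCreatePbufferSurface(dpy, cfg, pbAttribs);
        EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
        eglMakeCurrent(dpy, surf, surf, ctx);

        const char *exts     = (const char *)glGetString(GL_EXTENSIONS);
        const char *renderer = (const char *)glGetString(GL_RENDERER);
        GLint maxTexSize;
        glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTexSize);

        eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
        eglDestroyContext(dpy, ctx);
        eglDestroySurface(dpy, surf);
        eglTerminate(dpy);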

  • JOGL2 test compiles, but doesn't execute - help?

    - by Chuchinyi
    I have a problem with JOGL2. My JOGL2Template.java compiles fine, but executing it results in the following error:

        D:\java\java\jogl>javac JOGL2Template.java   <== compile ok
        D:\java\java\jogl>java JOGL2Template         <== execute error
        Exception in thread "main" java.lang.ExceptionInInitializerError
                at javax.media.opengl.GLProfile.<clinit>(GLProfile.java:1176)
                at JOGL2Template.<init>(JOGL2Template.java:24)
                at JOGL2Template.main(JOGL2Template.java:57)
        Caused by: java.lang.SecurityException: no certificate for gluegen-rt.dll in D:\java\lib\gluegen-rt-natives-windows-i586.jar
                at com.jogamp.common.util.JarUtil.validateCertificate(JarUtil.java:350)
                at com.jogamp.common.util.JarUtil.validateCertificates(JarUtil.java:324)
                at com.jogamp.common.util.cache.TempJarCache.validateCertificates(TempJarCache.java:328)
                at com.jogamp.common.util.cache.TempJarCache.bootstrapNativeLib(TempJarCache.java:283)
                at com.jogamp.common.os.Platform$3.run(Platform.java:308)
                at java.security.AccessController.doPrivileged(Native Method)
                at com.jogamp.common.os.Platform.loadGlueGenRTImpl(Platform.java:298)
                at com.jogamp.common.os.Platform.<clinit>(Platform.java:207)
                ... 3 more

    Here is the JOGL2Template.java source code:

        import java.awt.Dimension;
        import java.awt.Frame;
        import java.awt.event.WindowAdapter;
        import java.awt.event.WindowEvent;
        import javax.media.opengl.GLAutoDrawable;
        import javax.media.opengl.GLCapabilities;
        import javax.media.opengl.GLEventListener;
        import javax.media.opengl.GLProfile;
        import javax.media.opengl.awt.GLCanvas;
        import com.jogamp.opengl.util.FPSAnimator;
        import javax.swing.JFrame;

        /*
         * JOGL 2.0 Program Template For AWT applications
         */
        public class JOGL2Template extends JFrame implements GLEventListener {
           private static final int CANVAS_WIDTH = 640;  // Width of the drawable
           private static final int CANVAS_HEIGHT = 480; // Height of the drawable
           private static final int FPS = 60;            // Animator's target frames per second

           // Constructor to create profile, caps, drawable, animator, and initialize Frame
           public JOGL2Template() {
              // Get the default OpenGL profile that best reflects your running platform.
              GLProfile glp = GLProfile.getDefault();
              // Specifies a set of OpenGL capabilities, based on your profile.
              GLCapabilities caps = new GLCapabilities(glp);
              // Allocate a GLDrawable, based on your OpenGL capabilities.
              GLCanvas canvas = new GLCanvas(caps);
              canvas.setPreferredSize(new Dimension(CANVAS_WIDTH, CANVAS_HEIGHT));
              canvas.addGLEventListener(this);

              // Create an animator that drives canvas' display() at 60 fps.
              final FPSAnimator animator = new FPSAnimator(canvas, FPS);

              addWindowListener(new WindowAdapter() { // For the close button
                 @Override
                 public void windowClosing(WindowEvent e) {
                    // Use a dedicated thread to run the stop() to ensure that the
                    // animator stops before the program exits.
                    new Thread() {
                       @Override
                       public void run() {
                          animator.stop();
                          System.exit(0);
                       }
                    }.start();
                 }
              });

              add(canvas);
              pack();
              setTitle("OpenGL 2 Test");
              setVisible(true);
              animator.start(); // Start the animator
           }

           public static void main(String[] args) {
              new JOGL2Template();
           }

           @Override
           public void init(GLAutoDrawable drawable) {
              // Your OpenGL codes to perform one-time initialization tasks
              // such as setting up of lights and display lists.
           }

           @Override
           public void display(GLAutoDrawable drawable) {
              // Your OpenGL graphic rendering codes for each refresh.
           }

           @Override
           public void reshape(GLAutoDrawable drawable, int x, int y, int w, int h) {
              // Your OpenGL codes to set up the view port, projection mode and view volume.
           }

           @Override
           public void dispose(GLAutoDrawable drawable) {
              // Hardly used.
           }
        }

    Any ideas what might be the cause of these errors?

  • C++ most used libraries [on hold]

    - by Basaa
    I'm trying to find out whether or not I want to switch from Java to C++ for my OpenGL game programming. I have now set up a test project in VS 11 Professional with GLUT; I created my window with GLUT and can render OpenGL primitives without any problems. Now my question: which library (or libraries) is mostly used in the indie/semi-professional scene for using OpenGL in C++? By "using OpenGL" I mean: creating and managing an OpenGL window, actually using the OpenGL API, and handling user input (keyboard/mouse).
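
    As a point of reference, a minimal sketch with GLFW, one of the common picks for exactly the jobs listed (window/context creation and input); GLEW or a similar loader is usually added on top for extension entry points, and SDL fills the same role for many teams:

        #include <GLFW/glfw3.h>

        int main() {
            if (!glfwInit()) return -1;
            GLFWwindow *win = glfwCreateWindow(640, 480, "Demo", NULL, NULL);
            if (!win) { glfwTerminate(); return -1; }
            glfwMakeContextCurrent(win);

            while (!glfwWindowShouldClose(win)) {
                glClear(GL_COLOR_BUFFER_BIT);
                // ... OpenGL drawing goes here ...
                glfwSwapBuffers(win);
                glfwPollEvents();   // keyboard/mouse callbacks fire here
            }
            glfwTerminate();
            return 0;
        }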
